diff --git a/vendor/autoload.php b/vendor/autoload.php
new file mode 100644
index 0000000..4b5f544
--- /dev/null
+++ b/vendor/autoload.php
@@ -0,0 +1,11 @@
+> ~/.phpenv/versions/$(phpenv version-name)/etc/php.ini; fi;'
+ - cp test_services.json.dist test_services.json
+ - composer self-update
+ - composer install --no-interaction --prefer-source --dev
+
+script: vendor/bin/phpunit
+
+matrix:
+ allow_failures:
+ - php: 5.6
+ fast_finish: true
diff --git a/vendor/aws/aws-sdk-php/CHANGELOG.md b/vendor/aws/aws-sdk-php/CHANGELOG.md
new file mode 100644
index 0000000..2463ea2
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/CHANGELOG.md
@@ -0,0 +1,602 @@
+CHANGELOG
+=========
+
+2.6.6 (2014-05-29)
+------------------
+
+* Added support for the [Desired Partition Count scaling
+ option](http://aws.amazon.com/releasenotes/2440176739861815) to the
+ CloudSearch client. Hebrew is also now a supported language.
+* Updated the STS service description to the latest version.
+* [Docs] Updated some of the documentation about credential profiles.
+* Fixed an issue with the regular expression in the `S3Client::isValidBucketName`
+ method. See #298.
+
+2.6.5 (2014-05-22)
+------------------
+
+* Added cross-region support for the Amazon EC2 CopySnapshot operation.
+* Added Amazon Relational Database Service (Amazon RDS) support to the AWS OpsWorks client.
+* Added support for tagging environments to the AWS Elastic Beanstalk client.
+* Refactored the signature version 4 implementation to be able to pre-sign
+ most operations.
+
+2.6.4 (2014-05-20)
+------------------
+
+* Added support for lifecycles on versioning enabled buckets to the Amazon S3
+ client.
+* Fixed an Amazon S3 sync issue which resulted in unnecessary transfers when no
+ `$keyPrefix` argument was utilized.
+* Corrected the `CopySourceIfMatch` and `CopySourceIfNoneMatch` parameters for
+ Amazon S3 to not use a timestamp shape.
+* Corrected the sending of Amazon S3 PutBucketVersioning requests that utilize
+ the `MFADelete` parameter.
+
+2.6.3 (2014-05-14)
+------------------
+
+* Added the ability to modify Amazon SNS topic settings to the UpdateStack
+ operation of the AWS CloudFormation client.
+* Added support for the us-west-1, ap-southeast-2, and eu-west-1 regions to the
+ AWS CloudTrail client.
+* Removed no longer utilized AWS CloudTrail shapes from the model.
+
+2.6.2 (2014-05-06)
+------------------
+
+* Added support for Amazon SQS message attributes.
+* Fixed Amazon S3 multi-part uploads so that manually set ContentType values are not overwritten.
+* No longer recalculating file sizes when an Amazon S3 socket timeout occurs, because this was causing issues with
+  multi-part uploads and is very unlikely to be the culprit of a socket timeout.
+* Added better environment variable detection.
+
+2.6.1 (2014-04-25)
+------------------
+
+* Added support for the `~/.aws/credentials` INI file and credential profiles (via the `profile` option) as a safer
+ alternative to using explicit credentials with the `key` and `secret` options.
+* Added support for query filters and improved conditional expressions to the Amazon DynamoDB client.
+* Added support for the `ChefConfiguration` parameter to a few operations on the AWS OpsWorks Client.
+* Added support for Redis cache cluster snapshots to the Amazon ElastiCache client.
+* Added support for the `PlacementTenancy` parameter to the `CreateLaunchConfiguration` operation of the Auto Scaling
+ client.
+* Added support for the new R3 instance types to the Amazon EC2 client.
+* Added the `SpotInstanceRequestFulfilled` waiter to the Amazon EC2 client (see #241).
+* Improved the S3 Stream Wrapper by adding support for deleting pseudo directories (#264), updating error handling
+ (#276), and fixing `is_link()` for non-existent keys (#268).
+* Fixed #252 and updated the DynamoDB `WriteRequestBatch` abstraction to handle batches that were completely rejected
+ due to exceeding provisioned throughput.
+* Updated the SDK to support Guzzle 3.9.x.
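+
+  As a sketch of the new `profile` option (the profile name `my-profile` is
+  illustrative and assumes a matching section exists in `~/.aws/credentials`):
+
+  ```php
+  <?php
+  require 'vendor/autoload.php';
+
+  use Aws\S3\S3Client;
+
+  // Resolve credentials from the [my-profile] section of ~/.aws/credentials
+  // instead of passing explicit 'key' and 'secret' options.
+  $s3 = S3Client::factory(array('profile' => 'my-profile'));
+  ```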
+
+2.6.0 (2014-03-25)
+------------------
+
+* [BC] Updated the Amazon CloudSearch client to use the new 2013-01-01 API version (see [their release
+ notes](http://aws.amazon.com/releasenotes/6125075708216342)). This API version of CloudSearch is significantly
+ different than the previous one, and is not backwards compatible. See the
+ [Upgrading Guide](https://github.com/aws/aws-sdk-php/blob/master/UPGRADING.md) for more details.
+* Added support for the VPC peering features to the Amazon EC2 client.
+* Updated the Amazon EC2 client to use the new 2014-02-01 API version.
+* Added support for [resize progress data and the Cluster Revision Number
+ parameter](http://aws.amazon.com/releasenotes/0485739709714318) to the Amazon Redshift client.
+* Added the `ap-northeast-1`, `ap-southeast-2`, and `sa-east-1` regions to the Amazon CloudSearch client.
+
+2.5.4 (2014-03-20)
+------------------
+
+* Added support for [access logs](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/access-log-collection.html)
+ to the Elastic Load Balancing client.
+* Updated the Elastic Load Balancing client to the latest API version.
+* Added support for the `AWS_SECRET_ACCESS_KEY` environment variable.
+* Updated the Amazon CloudFront client to use the 2014-01-31 API version. See [their release
+ notes](http://aws.amazon.com/releasenotes/1900016175520505).
+* Updated the AWS OpsWorks client to the latest API version.
+* Amazon S3 Stream Wrapper now works correctly with pseudo folder keys created by the AWS Management Console.
+* Amazon S3 Stream Wrapper now implements `mkdir()` for nested folders similar to the AWS Management Console.
+* Addressed an issue with Amazon S3 presigned-URLs where X-Amz-* headers were not being added to the query string.
+* Addressed an issue with the Amazon S3 directory sync where paths that contained dot-segments were not properly
+  resolved. Removing the dot segments consistently helps to ensure that files are uploaded to their intended
+  destinations and that file key comparisons are accurately performed when determining which files to upload.
+
+2.5.3 (2014-02-27)
+------------------
+
+* Added support for HTTP and HTTPS string-match health checks and HTTPS health checks to the Amazon Route 53 client
+* Added support for the UPSERT action for the Amazon Route 53 ChangeResourceRecordSets operation
+* Added support for SerialNumber and TokenCode to the AssumeRole operation of the IAM Security Token Service (STS).
+* Added support for RequestInterval and FailureThreshold to the Amazon Route53 client.
+* Added support for smooth streaming to the Amazon CloudFront client.
+* Added the us-west-2, eu-west-1, ap-southeast-2, and ap-northeast-1 regions to the AWS Data Pipeline client.
+* Added iterators to the Amazon Kinesis client
+* Updated iterator configurations for all services to match our new iterator config spec (care was taken to continue
+  supporting manually-specified configurations in the old format to prevent a BC break)
+* Updated the Amazon EC2 model to include the latest updates and documentation. Removed deprecated license-related
+  operations (this is not considered a BC break since we have confirmed that these operations are not used by customers)
+* Updated the Amazon Route 53 client to use the 2013-04-01 API version
+* Fixed several iterator configurations for various services to better support existing operations and parameters
+* Fixed an issue with the Amazon S3 client where an exception was thrown when trying to add a default Content-MD5
+ header to a request that uses a non-rewindable stream.
+* Updated the Amazon S3 PostObject class to work with CNAME style buckets.
+
+2.5.2 (2014-01-29)
+------------------
+
+* Added support for dead letter queues to Amazon SQS
+* Added support for the new M3 medium and large instance types to the Amazon EC2 client
+* Added support for using the `eu-west-1` and `us-west-2` regions to the Amazon SES client
+* Added content-type guessing to the Amazon S3 stream wrapper (see #210)
+* Added an event to the Amazon S3 multipart upload helpers to allow granular customization of multipart uploads during
+ a sync (see #209)
+* Updated Signature V4 logic for Amazon S3 to throw an exception if you attempt to create a presigned URL that expires
+ later than a week (see #215)
+* Fixed the `downloadBucket` and `uploadDirectory` methods to support relative paths and better support
+ Windows (see #207)
+* Fixed issue #195 in the Amazon S3 multipart upload helpers to properly support additional parameters (see #211)
+* [Docs] Expanded examples in the [API reference](http://docs.aws.amazon.com/aws-sdk-php/latest/index.html) by default
+ so they don't get overlooked
+* [Docs] Moved the API reference links in the [service-specific user guide
+ pages](http://docs.aws.amazon.com/aws-sdk-php/guide/latest/index.html#service-specific-guides) to the bottom so
+ the page's content takes priority
+
+2.5.1 (2014-01-09)
+------------------
+
+* Added support for attaching existing Amazon EC2 instances to an Auto Scaling group to the Auto Scaling client
+* Added support for creating launch configurations from existing Amazon EC2 instances to the Auto Scaling client
+* Added support for describing Auto Scaling account limits to the Auto Scaling client
+* Added better support for block device mappings to the Auto Scaling client when creating launch configurations
+* Added support for [ranged inventory retrieval](http://docs.aws.amazon.com/amazonglacier/latest/dev/api-initiate-job-post.html#api-initiate-job-post-vault-inventory-list-filtering)
+ to the Amazon Glacier client
+* [Docs] Updated and added a lot of content in the [User Guide](http://docs.aws.amazon.com/aws-sdk-php/guide/latest/index.html)
+* Fixed a bug where the `KinesisClient::getShardIterator()` method was not working properly
+* Fixed an issue with Amazon SimpleDB where the 'Value' attribute was marked as required on DeleteAttribute and BatchDeleteAttributes
+* Fixed an issue with the Amazon S3 stream wrapper where empty place holder keys were being marked as files instead of directories
+* Added the ability to specify a custom signature implementation using a string identifier (e.g., 'v4', 'v2', etc.)
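+
+  A sketch of the string-identifier option (the region and signature values
+  here are illustrative):
+
+  ```php
+  <?php
+  require 'vendor/autoload.php';
+
+  use Aws\S3\S3Client;
+
+  // Select the Signature Version 4 implementation by its string identifier
+  // instead of passing a signature object.
+  $s3 = S3Client::factory(array(
+      'region'    => 'us-west-2',
+      'signature' => 'v4',
+  ));
+  ```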
+
+2.5.0 (2013-12-20)
+------------------
+
+* Added support for the new **China (Beijing) Region** to various services. This region is currently in limited preview.
+  Please see the AWS website for more information
+* Added support for different audio compression schemes to the Elastic Transcoder client (includes AAC-LC, HE-AAC,
+ and HE-AACv2)
+* Added support for preset and pipeline pagination to the Elastic Transcoder client. You can now view more than the
+ first 50 presets and pipelines with their corresponding list operations
+* Added support for [geo restriction](http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/WorkingWithDownloadDistributions.html#georestrictions)
+ to the Amazon CloudFront client
+* [SDK] Added Signature V4 support to the Amazon S3 and Amazon EC2 clients for the new China (Beijing) Region
+* [BC] Updated the AWS CloudTrail client to use their latest API changes due to early user feedback. Some parameters in
+ the `CreateTrail`, `UpdateTrail`, and `GetTrailStatus` have been deprecated and will be completely unavailable as
+ early as February 15th, 2014. Please see [this announcement on the CloudTrail
+ forum](https://forums.aws.amazon.com/ann.jspa?annID=2286). We are calling this out as a breaking change now to
+ encourage you to update your code at this time.
+* Updated the Amazon CloudFront client to use the 2013-11-11 API version
+* [BC] Updated the Amazon EC2 client to use the latest API. This resulted in a small change to a parameter in the
+ `RequestSpotInstances` operation. See [this commit](https://github.com/aws/aws-sdk-php/commit/36ae0f68d2a6dcc3bc28222f60ecb318449c4092#diff-bad2f6eac12565bb684f2015364c22bd)
+ for the change
+* [BC] Removed Signature V3 support (no longer needed) and refactored parts of the signature-related classes
+
+2.4.12 (2013-12-12)
+-------------------
+
+* Added support for **Amazon Kinesis**
+* Added the CloudTrail `LogRecordIterator`, `LogFileIterator`, and `LogFileReader` classes for reading log files
+ generated by the CloudTrail service
+* Added support for resource-level permissions to the AWS OpsWorks client
+* Added support for worker environment tiers to the AWS Elastic Beanstalk client
+* Added support for the new I2 instance types to the Amazon EC2 client
+* Added support for resource tagging to the Amazon Elastic MapReduce client
+* Added support for specifying a key encoding type to the Amazon S3 client
+* Added support for global secondary indexes to the Amazon DynamoDB client
+* Updated the Amazon ElastiCache client to use Signature Version 4
+* Fixed an issue in the waiter factory that caused an error when getting the factory for service clients without any
+ existing waiters
+* Fixed issue #187, where the DynamoDB Session Handler would fail to save the session if all the data is removed
+
+2.4.11 (2013-11-26)
+-------------------
+
+* Added support for copying DB snapshots from one AWS region to another to the Amazon RDS client
+* Added support for pagination of the `DescribeInstances` and `DescribeTags` operations to the Amazon EC2 client
+* Added support for the new C3 instance types and the g2.2xlarge instance type to the Amazon EC2 client
+* Added support for enabling *Single Root I/O Virtualization* (SR-IOV) support for the new C3 instance types to the
+ Amazon EC2 client
+* Updated the Amazon EC2 client to use the 2013-10-15 API version
+* Updated the Amazon RDS client to use the 2013-09-09 API version
+* Updated the Amazon CloudWatch client to use Signature Version 4
+
+2.4.10 (2013-11-14)
+-------------------
+
+* Added support for **AWS CloudTrail**
+* Added support for identity federation using SAML 2.0 to the AWS STS client
+* Added support for configuring SAML-compliant identity providers to the AWS IAM client
+* Added support for event notifications to the Amazon Redshift client
+* Added support for HSM storage for encryption keys to the Amazon Redshift client
+* Added support for encryption key rotation to the Amazon Redshift client
+* Added support for database audit logging to the Amazon Redshift client
+
+2.4.9 (2013-11-08)
+------------------
+
+* Added support for [cross-zone load balancing](http://aws.amazon.com/about-aws/whats-new/2013/11/06/elastic-load-balancing-adds-cross-zone-load-balancing/)
+ to the Elastic Load Balancing client.
+* Added support for a [new gateway configuration](http://aws.amazon.com/about-aws/whats-new/2013/11/05/aws-storage-gateway-announces-gateway-virtual-tape-library/),
+ Gateway-Virtual Tape Library, to the AWS Storage Gateway client.
+* Added support for stack policies to the AWS CloudFormation client.
+* Fixed issue #176 where attempting to upload directly to Amazon S3 using the `UploadBuilder` failed when using a custom
+  iterator that needs to be rewound.
+
+2.4.8 (2013-10-31)
+------------------
+
+* Updated the AWS Direct Connect client
+* Updated the Amazon Elastic MapReduce client to add support for new EMR APIs, termination of specific cluster
+ instances, and unlimited EMR steps.
+
+2.4.7 (2013-10-17)
+------------------
+
+* Added support for audio transcoding features to the Amazon Elastic Transcoder client
+* Added support for modifying Reserved Instances in a region to the Amazon EC2 client
+* Added support for new resource management features to the AWS OpsWorks client
+* Added support for additional HTTP methods to the Amazon CloudFront client
+* Added support for custom error page configuration to the Amazon CloudFront client
+* Added support for the public IP address association of instances in an Auto Scaling group via the Auto Scaling client
+* Added support for tags and filters to various operations in the Amazon RDS client
+* Added the ability to easily specify event listeners on waiters
+* Added support for using the `ap-southeast-2` region to the Amazon Glacier client
+* Added support for using the `ap-southeast-1` and `ap-southeast-2` regions to the Amazon Redshift client
+* Updated the Amazon EC2 client to use the 2013-09-11 API version
+* Updated the Amazon CloudFront client to use the 2013-09-27 API version
+* Updated the AWS OpsWorks client to use the 2013-07-15 API version
+* Updated the Amazon CloudSearch client to use Signature Version 4
+* Fixed an issue with the Amazon S3 Client so that the top-level XML element of the `CompleteMultipartUpload` operation
+ is correctly sent as `CompleteMultipartUpload`
+* Fixed an issue with the Amazon S3 Client so that you can now disable bucket logging using the `PutBucketLogging`
+  operation
+* Fixed an issue with the Amazon CloudFront client so that query string parameters in pre-signed URLs are correctly URL-encoded
+* Fixed an issue with the Signature Version 4 implementation where headers with multiple values were sometimes sorted
+ and signed incorrectly
+
+2.4.6 (2013-09-12)
+------------------
+
+* Added support for modifying EC2 Reserved Instances to the Amazon EC2 client
+* Added support for VPC features to the AWS OpsWorks client
+* Updated the DynamoDB Session Handler to implement the SessionHandlerInterface of PHP 5.4 when available
+* Updated the SNS Message Validator to throw an exception, instead of an error, when the raw post data is invalid
+* Fixed an issue in the S3 signature which ensures that parameters are sorted correctly for signing
+* Fixed an issue in the S3 client where the Sydney region was not allowed as a `LocationConstraint` for the
+ `PutObject` operation
+
+2.4.5 (2013-09-04)
+------------------
+
+* Added support for replication groups to the Amazon ElastiCache client
+* Added support for using the `us-gov-west-1` region to the AWS CloudFormation client
+
+2.4.4 (2013-08-29)
+------------------
+
+* Added support for assigning a public IP address to an instance at launch to the Amazon EC2 client
+* Updated the Amazon EC2 client to use the 2013-07-15 API version
+* Updated the Amazon SWF client to sign requests with Signature V4
+* Updated the Instance Metadata client to allow for higher and more customizable connection timeouts
+* Fixed an issue with the SDK where XML map structures were not being serialized correctly in some cases
+* Fixed issue #136 where a few of the new Amazon SNS mobile push operations were not working properly
+* Fixed an issue where the AWS STS `AssumeRoleWithWebIdentity` operation was requiring credentials and a signature
+ unnecessarily
+* Fixed an issue with the `S3Client::uploadDirectory` method so that true key prefixes can be used
+* [Docs] Updated the API docs to include sample code for each operation that indicates the parameter structure
+* [Docs] Updated the API docs to include more information in the descriptions of operations and parameters
+* [Docs] Added a page about Iterators to the user guide
+
+2.4.3 (2013-08-12)
+------------------
+
+* Added support for mobile push notifications to the Amazon SNS client
+* Added support for progress reporting on snapshot restore operations to the Amazon Redshift client
+* Updated the Amazon Elastic MapReduce client to use JSON serialization
+* Updated the Amazon Elastic MapReduce client to sign requests with Signature V4
+* Updated the SDK to throw `Aws\Common\Exception\TransferException` exceptions when a network error occurs instead of a
+ `Guzzle\Http\Exception\CurlException`. The TransferException class, however, extends from
+ `Guzzle\Http\Exception\CurlException`. You can continue to catch the Guzzle `CurlException` or catch
+ `Aws\Common\Exception\AwsExceptionInterface` to catch any exception that can be thrown by an AWS client
+* Fixed an issue with the Amazon S3 stream wrapper where trailing slashes were being added when listing directories
+
+2.4.2 (2013-07-25)
+------------------
+
+* Added support for cross-account snapshot access control to the Amazon Redshift client
+* Added support for decoding authorization messages to the AWS STS client
+* Added support for checking for required permissions via the `DryRun` parameter to the Amazon EC2 client
+* Added support for custom Amazon Machine Images (AMIs) and Chef 11 to the AWS OpsWorks client
+* Added an SDK compatibility test to allow users to quickly determine if their system meets the requirements of the SDK
+* Updated the Amazon EC2 client to use the 2013-06-15 API version
+* Fixed an unmarshalling error with the Amazon EC2 `CreateKeyPair` operation
+* Fixed an unmarshalling error with the Amazon S3 `ListMultipartUploads` operation
+* Fixed an issue with the Amazon S3 stream wrapper "x" fopen mode
+* Fixed an issue with `Aws\S3\S3Client::downloadBucket` by removing leading slashes from the passed `$keyPrefix` argument
+
+2.4.1 (2013-06-24)
+------------------
+
+* Added support for setting watermarks and max framerates to the Amazon Elastic Transcoder client
+* Added the `Aws\DynamoDb\Iterator\ItemIterator` class to make it easier to get items from the results of DynamoDB
+  operations
+* Added support for the `cr1.8xlarge` EC2 instance type. Use `Aws\Ec2\Enum\InstanceType::CR1_8XLARGE`
+* Added support for the suppression list SES mailbox simulator. Use `Aws\Ses\Enum\MailboxSimulator::SUPPRESSION_LIST`
+* [SDK] Fixed an issue with data formats throughout the SDK due to a regression. Dates are now sent over the wire with
+ the correct format. This issue affected the Amazon EC2, Amazon ElastiCache, AWS Elastic Beanstalk, Amazon EMR, and
+ Amazon RDS clients
+* Fixed an issue with the parameter serialization of the `ImportInstance` operation in the Amazon EC2 client
+* Fixed an issue with the Amazon S3 client where the `RoutingRules.Redirect.HostName` parameter of the
+ `PutBucketWebsite` operation was erroneously marked as required
+* Fixed an issue with the Amazon S3 client where the `DeleteObject` operation was missing parameters
+* Fixed an issue with the Amazon S3 client where the `Status` parameter of the `PutBucketVersioning` operation did not
+ properly support the "Suspended" value
+* Fixed an issue with the Amazon Glacier `UploadPartGenerator` class so that an exception is thrown if the provided body
+ to upload is less than 1 byte
+* Added MD5 validation to Amazon SQS ReceiveMessage operations
+
+2.4.0 (2013-06-18)
+------------------
+
+* [BC] Updated the Amazon CloudFront client to use the new 2013-05-12 API version which includes changes in how you
+ configure distributions. If you are not ready to upgrade to the new API, you can configure the SDK to use the previous
+ version of the API by setting the `version` option to `2012-05-05` when you instantiate the client (See
+ [`UPGRADING.md`](https://github.com/aws/aws-sdk-php/blob/master/UPGRADING.md))
+* Added abstractions for uploading a local directory to an Amazon S3 bucket (`$s3->uploadDirectory()`)
+* Added abstractions for downloading an Amazon S3 bucket to local directory (`$s3->downloadBucket()`)
+* Added an easy way to delete objects from an Amazon S3 bucket that match a regular expression or key prefix
+* Added an easy way to upload an object to Amazon S3 that automatically uses a multipart upload if the size of the
+ object exceeds a customizable threshold (`$s3->upload()`)
+* [SDK] Added facade classes for simple, static access to clients (e.g., `S3::putObject([...])`)
+* Added the `Aws\S3\S3Client::getObjectUrl` convenience method for getting the URL of an Amazon S3 object. This works
+ for both public and pre-signed URLs
+* Added support for using the `ap-northeast-1` region to the Amazon Redshift client
+* Added support for configuring custom SSL certificates to the Amazon CloudFront client via the `ViewerCertificate`
+ parameter
+* Added support for read replica status to the Amazon RDS client
+* Added "magic" access to iterators to make using iterators more convenient (e.g., `$s3->getListBucketsIterator()`)
+* Added the `waitUntilDBInstanceAvailable` and `waitUntilDBInstanceDeleted` waiters to the Amazon RDS client
+* Added the `createCredentials` method to the AWS STS client to make it easier to create a credentials object from the
+ results of an STS operation
+* Updated the Amazon RDS client to use the 2013-05-15 API version
+* Updated request retrying logic to automatically refresh expired credentials and retry with new ones
+* Updated the Amazon CloudFront client to sign requests with Signature V4
+* Updated the Amazon SNS client to sign requests with Signature V4, which enables larger payloads
+* Updated the S3 Stream Wrapper so that you can use stream resources in any S3 operation without having to manually
+ specify the `ContentLength` option
+* Fixed issue #94 so that the `Aws\S3\BucketStyleListener` is invoked on `command.after_prepare` and presigned URLs
+ are generated correctly from S3 commands
+* Fixed an issue so that creating presigned URLs using the Amazon S3 client now works with temporary credentials
+* Fixed an issue so that the `CORSRules.AllowedHeaders` parameter is now available when configuring CORS for Amazon S3
+* Set the Guzzle dependency to ~3.7.0
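+
+  The new upload abstractions can be combined as in this sketch (bucket, key,
+  and file names are illustrative):
+
+  ```php
+  <?php
+  require 'vendor/autoload.php';
+
+  use Aws\S3\S3Client;
+
+  $s3 = S3Client::factory();
+
+  // upload() switches to a multipart upload automatically when the body
+  // exceeds the (customizable) size threshold.
+  $s3->upload('my-bucket', 'my-key', fopen('/tmp/archive.zip', 'r'));
+
+  // Mirror a local directory into the bucket.
+  $s3->uploadDirectory('/local/path', 'my-bucket');
+
+  // getObjectUrl() works for both public and pre-signed URLs.
+  echo $s3->getObjectUrl('my-bucket', 'my-key', '+10 minutes');
+  ```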
+
+2.3.4 (2013-05-30)
+------------------
+
+* Set the Guzzle dependency to ~3.6.0
+
+2.3.3 (2013-05-28)
+------------------
+
+* Added support for web identity federation in the AWS Security Token Service (STS) API
+* Fixed an issue with creating pre-signed Amazon CloudFront RTMP URLs
+* Fixed issue #85 to correct the parameter serialization of NetworkInterfaces within the Amazon EC2 RequestSpotInstances
+ operation
+
+2.3.2 (2013-05-15)
+------------------
+
+* Added support for doing parallel scans to the Amazon DynamoDB client
+* [OpsWorks] Added support for using Elastic Load Balancing with the AWS OpsWorks client
+* Added support for using EBS-backed instances to the AWS OpsWorks client along with some other minor updates
+* Added support for finer-grained error messages to the AWS Data Pipeline client and updated the service description
+* Added the ability to set the `key_pair_id` and `private_key` options at the time of signing a CloudFront URL instead
+ of when instantiating the client
+* Added a new [Zip Download](http://pear.amazonwebservices.com/get/aws.zip) for installing the SDK
+* Fixed the API version for the AWS Support client to be `2013-04-15`
+* Fixed issue #78 by implementing `Aws\S3\StreamWrapper::stream_cast()` for the S3 stream wrapper
+* Fixed issue #79 by updating the S3 `ClearBucket` object to work with the `ListObjects` operation
+* Fixed issue #80 where the `ETag` was incorrectly labeled as a header value instead of being in the XML body for
+ the S3 `CompleteMultipartUpload` operation response
+* Fixed an issue where the `setCredentials()` method did not properly update the `SignatureListener`
+* Updated the required version of Guzzle to `">=3.4.3,<4"` to support Guzzle 3.5 which provides the SDK with improved
+ memory management
+
+2.3.1 (2013-04-30)
+------------------
+
+* Added support for **AWS Support**
+* Added support for using the `eu-west-1` region to the Amazon Redshift client
+* Fixed an issue with the Amazon RDS client where the `DownloadDBLogFilePortion` operation was not being serialized
+ properly
+* Fixed an issue with the Amazon S3 client where the `PutObjectCopy` alias was interfering with the `CopyObject`
+ operation
+* Added the ability to manually set a Content-Length header when using the `PutObject` and `UploadPart` operations of
+ the Amazon S3 client
+* Fixed an issue where the Amazon S3 class was not throwing an exception for a non-followable 301 redirect response
+* Fixed an issue where `fflush()` was called during the shutdown process of the stream handler for read-only streams
+
+2.3.0 (2013-04-18)
+------------------
+
+* Added support for Local Secondary Indexes to the Amazon DynamoDB client
+* [BC] Updated the Amazon DynamoDB client to use the new 2012-08-10 API version which includes changes in how you
+ specify keys. If you are not ready to upgrade to the new API, you can configure the SDK to use the previous version of
+ the API by setting the `version` option to `2011-12-05` when you instantiate the client (See
+ [`UPGRADING.md`](https://github.com/aws/aws-sdk-php/blob/master/UPGRADING.md)).
+* Added an Amazon S3 stream wrapper that allows PHP native file functions to be used to interact with S3 buckets and
+ objects
+* Added support for automatically retrying *throttled* requests with exponential backoff to all service clients
+* Added a new config option (`version`) to client objects to specify the API version to use if multiple are supported
+* Added a new config option (`gc_operation_delay`) to the DynamoDB Session Handler to specify a delay between requests
+ to the service during garbage collection in order to help regulate the consumption of throughput
+* Added support for using the `us-west-2` region to the Amazon Redshift client
+* [Docs] Added a way to use marked integration test code as example code in the user guide and API docs
+* Updated the Amazon RDS client to sign requests with Signature V4
+* Updated the Amazon S3 client to automatically add the `Content-Type` to `PutObject` and other upload operations
+* Fixed an issue where service clients with a global endpoint could have their region for signing set incorrectly if a
+ region other than `us-east-1` was specified.
+* Fixed an issue where reused command objects appended duplicate content to the user agent string
+* [SDK] Fixed an issue in a few operations (including `SQS::receiveMessage`) where the `curl.options` could not be
+ modified
+* [Docs] Added key information to the DynamoDB service description to provide more accurate API docs for some operations
+* [Docs] Added a page about Waiters to the user guide
+* [Docs] Added a page about the DynamoDB Session Handler to the user guide
+* [Docs] Added a page about response Models to the user guide
+* Bumped the required version of Guzzle to ~3.4.1
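+
+  Once the new stream wrapper is registered, PHP's native file functions
+  operate on `s3://` paths (a sketch; bucket and key names are illustrative):
+
+  ```php
+  <?php
+  require 'vendor/autoload.php';
+
+  use Aws\S3\S3Client;
+
+  $s3 = S3Client::factory();
+  $s3->registerStreamWrapper();
+
+  // Native file functions now read and write S3 objects.
+  file_put_contents('s3://my-bucket/notes.txt', 'hello');
+  echo file_get_contents('s3://my-bucket/notes.txt');
+  ```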
+
+2.2.1 (2013-03-18)
+------------------
+
+* Added support for viewing and downloading DB log files to the Amazon RDS client
+* Added the ability to validate incoming Amazon SNS messages. See the `Aws\Sns\MessageValidator` namespace
+* Added the ability to easily change the credentials that a client is configured to use via `$client->setCredentials()`
+* Added the `client.region_changed` and `client.credentials_changed` events on the client that are triggered when the
+ `setRegion()` and `setCredentials()` methods are called, respectively
+* Added support for using the `ap-southeast-2` region with the Amazon ElastiCache client
+* Added support for using the `us-gov-west-1` region with the Amazon SWF client
+* Updated the Amazon RDS client to use the 2013-02-12 API version
+* Fixed an issue in the Amazon EC2 service description that was affecting the use of the new `ModifyVpcAttribute` and
+ `DescribeVpcAttribute` operations
+* Added `ObjectURL` to the output of an Amazon S3 PutObject operation so that you can more easily retrieve the URL of an
+ object after uploading
+* Added a `createPresignedUrl()` method to any command object created by the Amazon S3 client to more easily create
+ presigned URLs
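+
+  A sketch of the new `createPresignedUrl()` helper (bucket, key, and expiry
+  are illustrative):
+
+  ```php
+  <?php
+  require 'vendor/autoload.php';
+
+  use Aws\S3\S3Client;
+
+  $s3 = S3Client::factory();
+
+  $command = $s3->getCommand('GetObject', array(
+      'Bucket' => 'my-bucket',
+      'Key'    => 'my-key',
+  ));
+
+  // Generate a URL that grants temporary access to the object.
+  $url = $command->createPresignedUrl('+10 minutes');
+  ```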
+
+2.2.0 (2013-03-11)
+------------------
+
+* Added support for **Amazon Elastic MapReduce (Amazon EMR)**
+* Added support for **AWS Direct Connect**
+* Added support for **Amazon ElastiCache**
+* Added support for **AWS Storage Gateway**
+* Added support for **AWS Import/Export**
+* Added support for **AWS CloudFormation**
+* Added support for **Amazon CloudSearch**
+* Added support for [provisioned IOPS](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.ProvisionedIOPS.html)
+  to the Amazon RDS client
+* Added support for promoting [read replicas](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html)
+ to the Amazon RDS client
+* Added support for [event notification subscriptions](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html)
+ to the Amazon RDS client
+* Added support for enabling/disabling DNS Hostnames and DNS Resolution in Amazon VPC to the Amazon EC2 client
+* Added support for enumerating account attributes to the Amazon EC2 client
+* Added support for copying AMIs across regions to the Amazon EC2 client
+* Added the ability to get a Waiter object from a client using the `getWaiter()` method
+* [SDK] Added the ability to load credentials from the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_KEY`.
+ This is compatible with AWS Elastic Beanstalk environment configurations
+* Added support for using the us-west-1, us-west-2, eu-west-1, and ap-southeast-1 regions with Amazon CloudSearch
+* Updated the Amazon RDS client to use the 2013-01-10 API version
+* Updated the Amazon EC2 client to use the 2013-02-01 API version
+* Added support for using SecurityToken with signature version 2 services
+* Added the client User-Agent header to exception messages for easier debugging
+* Added an easier way to disable operation parameter validation by setting `validation` to false when creating clients
+* Added the ability to disable the exponential backoff plugin
+* Added the ability to easily fetch the region name that a client is configured to use via `$client->getRegion()`
+* Added end-user guides available at http://docs.aws.amazon.com/aws-sdk-php/guide/latest/
+* Fixed issue #48 where signing Amazon S3 requests with null or empty metadata resulted in a signature error
+* Fixed issue #29 where Amazon S3 was intermittently closing a connection
+* Updated the Amazon S3 client to parse the AcceptRanges header for HeadObject and GetObject output
+* Updated the Amazon Glacier client to allow the `saveAs` parameter to be specified as an alias for `command.response_body`
+* Various performance improvements throughout the SDK
+* Removed endpoint providers and now placing service region information directly in service descriptions
+* Removed client resolvers when creating clients in a client's factory method (this should not have any impact on end users)
+
+2.1.2 (2013-02-18)
+------------------
+
+* Added support for **AWS OpsWorks**
+
+2.1.1 (2013-02-15)
+------------------
+
+* Added support for **Amazon Redshift**
+* Added support for **Amazon Simple Queue Service (Amazon SQS)**
+* Added support for **Amazon Simple Notification Service (Amazon SNS)**
+* Added support for **Amazon Simple Email Service (Amazon SES)**
+* Added support for **Auto Scaling**
+* Added support for **Amazon CloudWatch**
+* Added support for **Amazon Simple Workflow Service (Amazon SWF)**
+* Added support for **Amazon Relational Database Service (Amazon RDS)**
+* Added support for health checks and failover in Amazon Route 53
+* Updated the Amazon Route 53 client to use the 2012-12-12 API version
+* Updated `AbstractWaiter` to dispatch `waiter.before_attempt` and `waiter.before_wait` events
+* Updated `CallableWaiter` to allow for an array of context data to be passed to the callable
+* Fixed issue #29 so that the stat cache is cleared before performing multipart uploads
+* Fixed issue #38 so that Amazon CloudFront URLs are signed properly
+* Fixed an issue with Amazon S3 website redirects
+* Fixed a URL encoding inconsistency with Amazon S3 and pre-signed URLs
+* Fixed issue #42 to eliminate cURL error 65 for JSON services
+* Set Guzzle dependency to [~3.2.0](https://github.com/guzzle/guzzle/blob/master/CHANGELOG.md#320-2013-02-14)
+* Minimum version of PHP is now 5.3.3
+
+2.1.0 (2013-01-28)
+------------------
+
+* Waiters now require an associative array as input for the underlying operation performed by a waiter. See
+ `UPGRADING.md` for details.
+* Added support for **Amazon Elastic Compute Cloud (Amazon EC2)**
+* Added support for **Amazon Elastic Transcoder**
+* Added support for **Amazon SimpleDB**
+* Added support for **Elastic Load Balancing**
+* Added support for **AWS Elastic Beanstalk**
+* Added support for **AWS Identity and Access Management (IAM)**
+* Added support for Amazon S3 website redirection rules
+* Added support for the `RetrieveByteRange` parameter of the `InitiateJob` operation in Amazon Glacier
+* Added support for Signature Version 2
+* Clients now gain more information from service descriptions rather than client factory methods
+* Service descriptions are now versioned for clients
+* Fixed an issue where Amazon S3 did not use "restore" as a signable resource
+* Fixed an issue with Amazon S3 where `x-amz-meta-*` headers were not properly added with the CopyObject operation
+* Fixed an issue where the Amazon Glacier client was not using the correct User-Agent header
+* Fixed issue #13 in which constants defined by referencing other constants caused errors with early versions of PHP 5.3
+
+2.0.3 (2012-12-20)
+------------------
+
+* Added support for **AWS Data Pipeline**
+* Added support for **Amazon Route 53**
+* Fixed an issue with the Amazon S3 client where object keys with slashes were causing errors
+* Added a `SaveAs` parameter to the Amazon S3 `GetObject` operation to allow saving the object directly to a file
+* Refactored iterators to remove code duplication and ease creation of future iterators
+
+2.0.2 (2012-12-10)
+------------------
+
+* Fixed an issue with the Amazon S3 client where non-DNS compatible bucket names were previously causing a signature
+  mismatch error
+* Fixed an issue with the service description for the Amazon S3 `UploadPart` operation so that it works correctly
+* Fixed an issue with the Amazon S3 service description dealing with `response-*` query parameters of `GetObject`
+* Fixed an issue with the Amazon S3 client where object keys prefixed by the bucket name were being treated incorrectly
+* Fixed an issue with `Aws\S3\Model\MultipartUpload\ParallelTransfer` class
+* Added support for the `AssumeRole` operation for AWS STS
+* Added the `UploadBodyListener`, which allows upload operations in Amazon S3 and Amazon Glacier to accept file handles
+ in the `Body` parameter and file paths in the `SourceFile` parameter
+* Added Content-Type guessing for uploads
+* Added new region endpoints, including sa-east-1 and us-gov-west-1 for Amazon DynamoDB
+* Added methods to `Aws\S3\Model\MultipartUpload\UploadBuilder` class to make setting ACL and Content-Type easier
+
+2.0.1 (2012-11-13)
+------------------
+
+* Fixed a signature issue encountered when a request to Amazon S3 is redirected
+* Added support for archiving Amazon S3 objects to Amazon Glacier
+* Added CRC32 validation of Amazon DynamoDB responses
+* Added ConsistentRead support to the `BatchGetItem` operation of Amazon DynamoDB
+* Added new region endpoints, including Sydney
+
+2.0.0 (2012-11-02)
+------------------
+
+* Initial release of the AWS SDK for PHP Version 2.
+* Added support for **Amazon Simple Storage Service (Amazon S3)**
+* Added support for **Amazon DynamoDB**
+* Added support for **Amazon Glacier**
+* Added support for **Amazon CloudFront**
+* Added support for **AWS Security Token Service (AWS STS)**
diff --git a/vendor/aws/aws-sdk-php/CONTRIBUTING.md b/vendor/aws/aws-sdk-php/CONTRIBUTING.md
new file mode 100644
index 0000000..a2d1fab
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/CONTRIBUTING.md
@@ -0,0 +1,80 @@
+# Contributing to the AWS SDK for PHP
+
+We work hard to provide a high-quality and useful SDK, and we greatly value feedback and contributions from our
+community. Whether it's a new feature, correction, or additional documentation, we welcome your pull requests.
+Please submit any [issues][] or [pull requests][pull-requests] through GitHub.
+
+## What you should keep in mind
+
+1. The SDK is released under the [Apache license][license]. Any code you submit will be released under that license. For
+ substantial contributions, we may ask you to sign a [Contributor License Agreement (CLA)][cla].
+2. We follow the [PSR-0][], [PSR-1][], and [PSR-2][] recommendations from the [PHP Framework Interop Group][php-fig].
+ Please submit code that follows these standards. The [PHP CS Fixer][cs-fixer] tool can be helpful for formatting your
+ code.
+3. We maintain a high percentage of code coverage in our unit tests. If you make changes to the code, please add,
+ update, and/or remove unit (and integration) tests as appropriate.
+4. We may choose not to accept pull requests that change service descriptions (e.g., files like
+ `src/Aws/OpsWorks/Resources/opsworks-2013-02-18.php`). We generate these files based on our internal knowledge of
+ the AWS services. If there is something incorrect with or missing from a service description, it may be more
+ appropriate to [submit an issue][issues]. We *will*, however, consider pull requests affecting service descriptions,
+ if the changes are related to **Iterator** or **Waiter** configurations (e.g. [PR #84][pr-84]).
+5. If your code does not conform to the PSR standards or does not include adequate tests, we may ask you to update your
+ pull requests before we accept them. We also reserve the right to deny any pull requests that do not align with our
+ standards or goals.
+6. If you would like to implement support for a significant feature that is not yet available in the SDK, please talk to
+ us beforehand to avoid any duplication of effort.
+
+## What we are looking for
+
+We are open to anything that improves the SDK and doesn't unnecessarily cause backwards-incompatible changes. If you are
+unsure if your idea is something we would be open to, please ask us (open a ticket, send us an email, post on the
+forums, etc.). Specifically, here are a few things that we would appreciate help on:
+
+1. **Waiters** – Waiter configurations are located in the service descriptions. You can also create concrete waiters
+ within the `Aws\*\Waiter` namespace of a service if the logic of the waiter absolutely cannot be defined using waiter
+ configuration. There are many waiters that we currently provide, but many that we do not. Please let us know if you
+ have any questions about creating waiter configurations.
+2. **Docs** – Our [User Guide][user-guide] is an ongoing project, and we would greatly appreciate contributions. The
+ docs are written as a [Sphinx][] website using [reStructuredText][] (very similar to Markdown). The User Guide is
+ located in the `docs` directory of this repository. Please see the [User Guide README][docs-readme] for more
+ information about how to build the User Guide.
+3. **Tests** – We maintain high code coverage, but if there are any tests you feel are missing, please add them.
+4. **Convenience features** – Are there any features you feel would add value to the SDK (e.g., batching for SES, SNS
+ message verification, S3 stream wrapper, etc.)? Contributions in this area would be greatly appreciated.
+5. **Third-party modules** – We have modules published for [Silex][mod-silex], [Laravel 4][mod-laravel], and [Zend
+ Framework 2][mod-zf2]. Please let us know if you are interested in creating integrations with other frameworks. We
+ would be happy to help.
+6. If you have some other ideas, please let us know!
+
+## Running the unit tests
+
+The AWS SDK for PHP is unit tested using PHPUnit. You can run the unit tests of the SDK after copying
+`phpunit.xml.dist` to `phpunit.xml`:
+
+ cp phpunit.xml.dist phpunit.xml
+
+Next, you need to install the dependencies of the SDK using Composer:
+
+ composer.phar install
+
+Now you're ready to run the unit tests using PHPUnit:
+
+ vendor/bin/phpunit
+
+[issues]: https://github.com/aws/aws-sdk-php/issues
+[pull-requests]: https://github.com/aws/aws-sdk-php/pulls
+[license]: http://aws.amazon.com/apache2.0/
+[cla]: http://en.wikipedia.org/wiki/Contributor_License_Agreement
+[psr-0]: https://github.com/php-fig/fig-standards/blob/master/accepted/PSR-0.md
+[psr-1]: https://github.com/php-fig/fig-standards/blob/master/accepted/PSR-1-basic-coding-standard.md
+[psr-2]: https://github.com/php-fig/fig-standards/blob/master/accepted/PSR-2-coding-style-guide.md
+[php-fig]: http://php-fig.org
+[cs-fixer]: http://cs.sensiolabs.org/
+[user-guide]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/index.html
+[sphinx]: http://sphinx-doc.org/
+[restructuredtext]: http://sphinx-doc.org/rest.html
+[docs-readme]: https://github.com/aws/aws-sdk-php/blob/master/docs/README.md
+[mod-silex]: https://github.com/aws/aws-sdk-php-silex
+[mod-laravel]: https://github.com/aws/aws-sdk-php-laravel
+[mod-zf2]: https://github.com/aws/aws-sdk-php-zf2
+[pr-84]: https://github.com/aws/aws-sdk-php/pull/84
diff --git a/vendor/aws/aws-sdk-php/LICENSE.md b/vendor/aws/aws-sdk-php/LICENSE.md
new file mode 100644
index 0000000..8d53e9f
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/LICENSE.md
@@ -0,0 +1,141 @@
+# Apache License
+Version 2.0, January 2004
+
+TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+## 1. Definitions.
+
+"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1
+through 9 of this document.
+
+"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the
+License.
+
+"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled
+by, or are under common control with that entity. For the purposes of this definition, "control" means
+(i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract
+or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial
+ownership of such entity.
+
+"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
+
+"Source" form shall mean the preferred form for making modifications, including but not limited to software
+source code, documentation source, and configuration files.
+
+"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form,
+including but not limited to compiled object code, generated documentation, and conversions to other media
+types.
+
+"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License,
+as indicated by a copyright notice that is included in or attached to the work (an example is provided in the
+Appendix below).
+
+"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from)
+the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent,
+as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not
+include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work
+and Derivative Works thereof.
+
+"Contribution" shall mean any work of authorship, including the original version of the Work and any
+modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to
+Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to
+submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of
+electronic, verbal, or written communication sent to the Licensor or its representatives, including but not
+limited to communication on electronic mailing lists, source code control systems, and issue tracking systems
+that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but
+excluding communication that is conspicuously marked or otherwise designated in writing by the copyright
+owner as "Not a Contribution."
+
+"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been
+received by Licensor and subsequently incorporated within the Work.
+
+## 2. Grant of Copyright License.
+
+Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual,
+worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare
+Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such
+Derivative Works in Source or Object form.
+
+## 3. Grant of Patent License.
+
+Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual,
+worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent
+license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such
+license applies only to those patent claims licensable by such Contributor that are necessarily infringed by
+their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such
+Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim
+or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work
+constitutes direct or contributory patent infringement, then any patent licenses granted to You under this
+License for that Work shall terminate as of the date such litigation is filed.
+
+## 4. Redistribution.
+
+You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without
+modifications, and in Source or Object form, provided that You meet the following conditions:
+
+ 1. You must give any other recipients of the Work or Derivative Works a copy of this License; and
+
+ 2. You must cause any modified files to carry prominent notices stating that You changed the files; and
+
+ 3. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent,
+ trademark, and attribution notices from the Source form of the Work, excluding those notices that do
+ not pertain to any part of the Derivative Works; and
+
+ 4. If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that
+ You distribute must include a readable copy of the attribution notices contained within such NOTICE
+ file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed as part of the Derivative Works; within
+ the Source form or documentation, if provided along with the Derivative Works; or, within a display
+ generated by the Derivative Works, if and wherever such third-party notices normally appear. The
+ contents of the NOTICE file are for informational purposes only and do not modify the License. You may
+ add Your own attribution notices within Derivative Works that You distribute, alongside or as an
+ addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be
+ construed as modifying the License.
+
+You may add Your own copyright statement to Your modifications and may provide additional or different license
+terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative
+Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the
+conditions stated in this License.
+
+## 5. Submission of Contributions.
+
+Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by
+You to the Licensor shall be under the terms and conditions of this License, without any additional terms or
+conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate
+license agreement you may have executed with Licensor regarding such Contributions.
+
+## 6. Trademarks.
+
+This License does not grant permission to use the trade names, trademarks, service marks, or product names of
+the Licensor, except as required for reasonable and customary use in describing the origin of the Work and
+reproducing the content of the NOTICE file.
+
+## 7. Disclaimer of Warranty.
+
+Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor
+provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT,
+MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the
+appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of
+permissions under this License.
+
+## 8. Limitation of Liability.
+
+In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless
+required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any
+Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential
+damages of any character arising as a result of this License or out of the use or inability to use the Work
+(including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or
+any and all other commercial damages or losses), even if such Contributor has been advised of the possibility
+of such damages.
+
+## 9. Accepting Warranty or Additional Liability.
+
+While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for,
+acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this
+License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole
+responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold
+each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason
+of your accepting any such warranty or additional liability.
+
+END OF TERMS AND CONDITIONS
diff --git a/vendor/aws/aws-sdk-php/NOTICE.md b/vendor/aws/aws-sdk-php/NOTICE.md
new file mode 100644
index 0000000..8485853
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/NOTICE.md
@@ -0,0 +1,112 @@
+# AWS SDK for PHP
+
+
+
+Copyright 2010-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License").
+You may not use this file except in compliance with the License.
+A copy of the License is located at
+
+
+
+or in the "license" file accompanying this file. This file is distributed
+on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
+express or implied. See the License for the specific language governing
+permissions and limitations under the License.
+
+# Guzzle
+
+
+
+Copyright (c) 2011 Michael Dowling, https://github.com/mtdowling
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+
+# Symfony
+
+
+
+Copyright (c) 2004-2012 Fabien Potencier
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is furnished
+to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+
+# Doctrine Common
+
+
+
+Copyright (c) 2006-2012 Doctrine Project
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of
+this software and associated documentation files (the "Software"), to deal in
+the Software without restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+of the Software, and to permit persons to whom the Software is furnished to do
+so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+# Monolog
+
+
+
+Copyright (c) Jordi Boggiano
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is furnished
+to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
diff --git a/vendor/aws/aws-sdk-php/README.md b/vendor/aws/aws-sdk-php/README.md
new file mode 100644
index 0000000..abadfd3
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/README.md
@@ -0,0 +1,171 @@
+# AWS SDK for PHP
+
+
+The **AWS SDK for PHP** enables PHP developers to use [Amazon Web Services][aws] in their PHP code, and build robust
+applications and software using services like Amazon S3, Amazon DynamoDB, Amazon Glacier, etc. You can get started in
+minutes by [installing the SDK through Composer][docs-installation] or by downloading a single [zip][install-zip] or
+[phar][install-phar] file.
+
+## Resources
+
+* [User Guide][docs-guide] – For in-depth getting started and usage information
+* [API Docs][docs-api] – For operations, parameters, responses, and examples
+* [Blog][sdk-blog] – Tips & tricks, articles, and announcements
+* [Sample Project][sdk-sample] - A quick, sample project to help get you started
+* [Forum][sdk-forum] – Ask questions, get help, and give feedback
+* [Issues][sdk-issues] – Report issues and submit pull requests (see [Apache 2.0 License][sdk-license])
+* [@awsforphp][sdk-twitter] – Follow us on Twitter
+
+**NEW:** Watch our video — **[Mastering the AWS SDK for PHP](http://youtu.be/_zaW2VZB1ok)** from AWS re:Invent 2013!
+
+## Features
+
+* Provides easy-to-use HTTP clients for all supported AWS [services][docs-services], [regions][docs-rande], and
+ authentication protocols.
+* Is built for PHP 5.3.3+ and is compliant with [PSR-0][], [PSR-1][], and [PSR-2][].
+* Is easy to install through [Composer][install-packagist], [PEAR][install-pear], or single download ([zip][install-zip]
+ or [phar][install-phar]).
+* Is built on [Guzzle v3][guzzle], and utilizes many of its features including persistent connections, parallel requests,
+ events and plugins (via [Symfony2 EventDispatcher][symfony2-events]), service descriptions, [over-the-wire
+ logging][docs-wire-logging], caching, flexible batching, and request retrying with truncated exponential backoff.
+* Provides convenience features including easy response pagination via [Iterators][docs-iterators], resource
+ [Waiters][docs-waiters], and simple [modelled responses][docs-models].
+* Allows you to [sync local directories to Amazon S3 buckets][docs-s3-sync].
+* Provides a [multipart uploader tool][docs-s3-multipart] for Amazon S3 and Amazon Glacier that can be paused and
+ resumed.
+* Provides an [Amazon S3 Stream Wrapper][docs-streamwrapper], so that you can use PHP's native file handling functions
+ to interact with your S3 buckets and objects like a local filesystem.
+* Provides the [Amazon DynamoDB Session Handler][docs-ddbsh] for easily scaling sessions on a fast, NoSQL database.
+* Automatically uses [IAM Instance Profile Credentials][aws-iam-credentials] on configured Amazon EC2 instances.
+
+## Getting Started
+
+1. **Sign up for AWS** – Before you begin, you need to [sign up for an AWS account][docs-signup] and retrieve your AWS
+ credentials.
+1. **Minimum requirements** – To run the SDK, your system will need to meet the [minimum
+ requirements][docs-requirements], including having **PHP 5.3.3+** compiled with the cURL extension and cURL 7.16.2+
+ compiled with OpenSSL and zlib.
+1. **Install the SDK** – Using [Composer][] is the recommended way to install the AWS SDK for PHP. The SDK is available
+ via [Packagist][] under the [`aws/aws-sdk-php`][install-packagist] package. Please see the
+ [Installation section of the User Guide][docs-installation] for more detailed information about installing the SDK
+ through Composer and other means (e.g., [Phar][install-phar], [Zip][install-zip], [PEAR][install-pear]).
+1. **Using the SDK** – The best way to become familiar with how to use the SDK is to read the [User Guide][docs-guide].
+ The [Getting Started Guide][docs-quickstart] will help you become familiar with the basic concepts, and there are
+ also specific guides for each of the [supported services][docs-services].
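+
+The Composer installation step above amounts to declaring the package in your project's `composer.json` and then running the installer (the version constraint shown is illustrative):
+
+```json
+{
+    "require": {
+        "aws/aws-sdk-php": "2.*"
+    }
+}
+```
+
+Then run `php composer.phar install` to download the SDK into your `vendor/` directory.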
+
+## Quick Example
+
+### Upload a File to Amazon S3
+
+```php
+<?php
+// Include the SDK using the Composer autoloader
+require 'vendor/autoload.php';
+
+use Aws\Common\Aws;
+use Aws\S3\Exception\S3Exception;
+
+// Instantiate an S3 client via the service builder
+$s3 = Aws::factory('/path/to/config.php')->get('s3');
+
+// Upload a publicly accessible file.
+// The file size, file type, and MD5 hash are automatically calculated by the SDK
+try {
+ $s3->putObject(array(
+ 'Bucket' => 'my-bucket',
+ 'Key' => 'my-object',
+ 'Body' => fopen('/path/to/file', 'r'),
+ 'ACL' => 'public-read',
+ ));
+} catch (S3Exception $e) {
+ echo "There was an error uploading the file.\n";
+}
+```
+
+You can also use the even easier `upload()` method, which will automatically do either single or multipart uploads,
+as needed.
+
+```php
+try {
+ $s3->upload('my-bucket', 'my-object', fopen('/path/to/file', 'r'), 'public-read');
+} catch (S3Exception $e) {
+ echo "There was an error uploading the file.\n";
+}
+```
+
+### More Examples
+
+* [Get an object from Amazon S3 and save it to a file][example-s3-getobject]
+* [Upload a large file to Amazon S3 in parts][example-s3-multipart]
+* [Put an item in your Amazon DynamoDB table][example-dynamodb-putitem]
+* [Send a message to your Amazon SQS queue][example-sqs-sendmessage]
+* Please browse the [User Guide][docs-guide] and [API docs][docs-api] or check out our [AWS SDK Development
+ Blog][sdk-blog] for even more examples.
+
+### Related Projects
+
+* [AWS Service Provider for Laravel][mod-laravel]
+* [AWS SDK ZF2 Module][mod-zf2]
+* [AWS Service Provider for Silex][mod-silex]
+* [Guzzle v3][guzzle-docs] – PHP HTTP client and framework
+* Other [AWS SDKs & Tools][aws-tools] (e.g., js, cli, ruby, python, java, etc.)
+
+[sdk-website]: http://aws.amazon.com/sdkforphp
+[sdk-forum]: https://forums.aws.amazon.com/forum.jspa?forumID=80
+[sdk-issues]: https://github.com/aws/aws-sdk-php/issues
+[sdk-license]: http://aws.amazon.com/apache2.0/
+[sdk-blog]: http://blogs.aws.amazon.com/php
+[sdk-twitter]: https://twitter.com/awsforphp
+[sdk-sample]: http://aws.amazon.com/developers/getting-started/php
+
+[install-packagist]: https://packagist.org/packages/aws/aws-sdk-php
+[install-phar]: http://pear.amazonwebservices.com/get/aws.phar
+[install-zip]: http://pear.amazonwebservices.com/get/aws.zip
+[install-pear]: http://pear.amazonwebservices.com
+
+[docs-api]: http://docs.aws.amazon.com/aws-sdk-php/latest/index.html
+[docs-guide]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/index.html
+[docs-contribution]: https://github.com/aws/aws-sdk-php/blob/master/CONTRIBUTING.md
+[docs-performance]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/performance.html
+[docs-migration]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/migration-guide.html
+[docs-signup]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/awssignup.html
+[docs-requirements]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/requirements.html
+[docs-installation]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/installation.html
+[docs-quickstart]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/quick-start.html
+[docs-iterators]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/quick-start.html#iterators
+[docs-waiters]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/feature-waiters.html
+[docs-models]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/feature-models.html
+[docs-exceptions]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/quick-start.html#error-handling
+[docs-wire-logging]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/faq.html#how-can-i-see-what-data-is-sent-over-the-wire
+[docs-services]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/index.html#supported-services
+[docs-ddbsh]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/feature-dynamodb-session-handler.html
+[docs-rande]: http://docs.aws.amazon.com/general/latest/gr/rande.html
+[docs-streamwrapper]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/service-s3.html#amazon-s3-stream-wrapper
+[docs-s3-sync]: http://blogs.aws.amazon.com/php/post/Tx2W9JAA7RXVOXA/Syncing-Data-with-Amazon-S3
+[docs-s3-multipart]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/service-s3.html#uploading-large-files-using-multipart-uploads
+
+[aws]: http://aws.amazon.com
+[aws-iam-credentials]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingIAM.html#UsingIAMrolesWithAmazonEC2Instances
+[aws-tools]: http://aws.amazon.com/tools
+[guzzle]: https://github.com/guzzle/guzzle3
+[guzzle-docs]: https://guzzle3.readthedocs.org
+[composer]: http://getcomposer.org
+[packagist]: http://packagist.org
+[psr-0]: https://github.com/php-fig/fig-standards/blob/master/accepted/PSR-0.md
+[psr-1]: https://github.com/php-fig/fig-standards/blob/master/accepted/PSR-1-basic-coding-standard.md
+[psr-2]: https://github.com/php-fig/fig-standards/blob/master/accepted/PSR-2-coding-style-guide.md
+[symfony2-events]: http://symfony.com/doc/2.3/components/event_dispatcher/introduction.html
+
+[example-sqs-sendmessage]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/service-sqs.html#sending-messages
+[example-s3-getobject]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/service-s3.html#saving-objects-to-a-file
+[example-s3-multipart]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/service-s3.html#uploading-large-files-using-multipart-uploads
+[example-dynamodb-putitem]: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/service-dynamodb.html#adding-items
+
+[mod-laravel]: https://github.com/aws/aws-sdk-php-laravel
+[mod-zf2]: https://github.com/aws/aws-sdk-php-zf2
+[mod-silex]: https://github.com/aws/aws-sdk-php-silex
diff --git a/vendor/aws/aws-sdk-php/UPGRADING.md b/vendor/aws/aws-sdk-php/UPGRADING.md
new file mode 100644
index 0000000..0984c41
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/UPGRADING.md
@@ -0,0 +1,263 @@
+Upgrading Guide
+===============
+
+Upgrade from 2.5 to 2.6
+-----------------------
+
+**IMPORTANT:** Version 2.6 *is* backwards compatible with version 2.5, *unless* you are using the Amazon CloudSearch
+client. If you are using CloudSearch, please read the next section carefully.
+
+### Amazon CloudSearch
+
+Version 2.6 of the AWS SDK for PHP has been updated to use the 2013-01-01 API version of Amazon CloudSearch by default.
+
+The 2013-01-01 API marks a significant upgrade of Amazon CloudSearch, but it also introduces numerous breaking changes.
+CloudSearch now supports 33 languages, highlighting, autocomplete suggestions, geospatial search, AWS IAM integration to
+control access to domain configuration actions, and user-configurable scaling and availability options. These new
+features are reflected in changes to the methods and parameters of the CloudSearch client.
+
+For details about the new API and how to update your usage of CloudSearch, please consult the [Configuration API
+Reference for Amazon CloudSearch](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuration-api.html)
+and the guide for [Migrating to the Amazon CloudSearch 2013-01-01 API](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/migrating.html).
+
+If you would like to continue using the older 2011-02-01 API, you can configure this when you instantiate the
+`CloudSearchClient`:
+
+```php
+use Aws\CloudSearch\CloudSearchClient;
+
+$client = CloudSearchClient::factory(array(
+ 'key' => '',
+ 'secret' => '',
+ 'region' => '',
+ 'version' => '2011-02-01',
+));
+```
+
+Upgrade from 2.4 to 2.5
+-----------------------
+
+### Amazon EC2
+
+A small, backwards-incompatible change has been made to the Amazon EC2 API. The `LaunchConfiguration.MonitoringEnabled`
+parameter of the `RequestSpotInstances` operation has been changed to `LaunchConfiguration.Monitoring.Enabled`. See [this
+commit](https://github.com/aws/aws-sdk-php/commit/36ae0f68d2a6dcc3bc28222f60ecb318449c4092#diff-bad2f6eac12565bb684f2015364c22bd)
+for the exact change. You are only affected if you use this specific parameter. To make your code
+work with the updated parameter, change the structure of your request slightly:
+
+```php
+// The OLD way
+$result = $ec2->requestSpotInstances(array(
+ // ...
+ 'LaunchSpecification' => array(
+ // ...
+ 'MonitoringEnabled' => true,
+ // ...
+ ),
+ // ...
+));
+
+// The NEW way
+$result = $ec2->requestSpotInstances(array(
+ // ...
+ 'LaunchSpecification' => array(
+ // ...
+ 'Monitoring' => array(
+ 'Enabled' => true,
+ ),
+ // ...
+ ),
+ // ...
+));
+```
+
+### AWS CloudTrail
+
+AWS CloudTrail has made changes to its API. If you are not using the CloudTrail service, you will not be
+affected by this change.
+
+Here is an excerpt (with minor modifications) directly from the [CloudTrail team's
+announcement](https://forums.aws.amazon.com/ann.jspa?annID=2286) regarding this change:
+
+> [...] We have made some minor improvements/fixes to the service API, based on early feedback. The impact of these
+> changes to you depends on how you are currently interacting with the CloudTrail service. [...] If you have code that
+> calls the APIs below, you will need to make minor changes.
+>
+> There are two changes:
+>
+> 1) `CreateTrail` / `UpdateTrail`: These APIs originally took a single parameter, a `Trail` object. [...] We have
+> changed this so that you can now simply pass individual parameters directly to these APIs. The same applies to the
+> responses of these APIs, namely the APIs return individual fields directly [...]
+> 2) `GetTrailStatus`: The actual values of the fields returned and their data types were not all as intended. As such,
+> we are deprecating a set of fields, and adding a new set of replacement fields. The following fields are now
+> deprecated, and should no longer be used:
+>
+> * `LatestDeliveryAttemptTime` (String): Time CloudTrail most recently attempted to deliver a file to S3 configured
+> bucket.
+> * `LatestNotificationAttemptTime` (String): As above, but for publishing a notification to configured SNS topic.
+> * `LatestDeliveryAttemptSucceeded` (String): This one had a mismatch between implementation and documentation. As
+> documented: whether or not the latest file delivery was successful. As implemented: Time of most recent successful
+> file delivery.
+> * `LatestNotificationAttemptSucceeded` (String): As above, but for SNS notifications.
+> * `TimeLoggingStarted` (String): Time `StartLogging` was most recently called. [...]
+> * `TimeLoggingStopped` (String): Time `StopLogging` was most recently called.
+>
+> The following fields are new, and replace the fields above:
+>
+> * `LatestDeliveryTime` (Date): Date/Time that CloudTrail most recently delivered a log file.
+> * `LatestNotificationTime` (Date): As above, for SNS notifications.
+> * `StartLoggingTime` (Date): Same as `TimeLoggingStarted`, but with more consistent naming, and correct data type.
+> * `StopLoggingTime` (Date): Same as `TimeLoggingStopped`, but with more consistent naming, and correct data type.
+>
+> Note that `LatestDeliveryAttemptSucceeded` and `LatestNotificationAttemptSucceeded` have no direct replacement. To
+> query whether everything is configured correctly for log file delivery, it is sufficient to query `LatestDeliveryError`,
+> and if non-empty that means that there is a configuration problem preventing CloudTrail from being able to deliver
+> logs successfully. Basically either the bucket doesn’t exist, or CloudTrail doesn’t have sufficient permissions to
+> write to the configured path in the bucket. Likewise for `LatestNotificationAttemptSucceeded`.
+>
+> The deprecated fields will be removed in the future, no earlier than February 15. Both sets of fields will coexist on
+> the service during this period to give those who are using the deprecated fields time to switch over to the
+> new fields. However, new SDKs and CLIs will remove the deprecated fields sooner than that. Previous SDK and CLI
+> versions will continue to work until the deprecated fields are removed from the service.
+>
+> We apologize for any inconvenience, and appreciate your understanding as we make these adjustments to improve the
+> long-term usability of the CloudTrail APIs.
+
+We are marking this as a breaking change now, ahead of the February 15th cutoff, and we encourage everyone to
+update their code now. The changes to how you use `createTrail()` and `updateTrail()` are easy to make:
+
+```php
+// The OLD way
+$cloudTrail->createTrail(array(
+ 'trail' => array(
+ 'Name' => 'TRAIL_NAME',
+ 'S3BucketName' => 'BUCKET_NAME',
+ )
+));
+
+// The NEW way
+$cloudTrail->createTrail(array(
+ 'Name' => 'TRAIL_NAME',
+ 'S3BucketName' => 'BUCKET_NAME',
+));
+```
+
+### China (Beijing) Region / Signatures
+
+This release adds support for the new China (Beijing) Region. This region requires that Signature V4 be used for both
+Amazon S3 and Amazon EC2 requests. We've added support for Signature V4 in both of these services for clients
+configured for this region. While doing this work, we did some refactoring to the signature classes and also removed
+support for Signature V3, as it is no longer needed. Unless you are explicitly referencing Signature V3 or explicitly
+interacting with signature objects, these changes should not affect you.
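+
+If you do need to request a particular signature explicitly, the client factory's `signature` option can be used. A
+sketch (the region value is illustrative; clients configured for cn-north-1 already select Signature V4 automatically):
+
+```php
+use Aws\S3\S3Client;
+
+$s3 = S3Client::factory(array(
+    'region'    => 'cn-north-1',
+    'signature' => 'v4',
+));
+```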
+
+Upgrade from 2.3 to 2.4
+-----------------------
+
+### Amazon CloudFront Client
+
+The new 2013-05-12 API version of Amazon CloudFront includes support for custom SSL certificates via the
+`ViewerCertificate` parameter, but also introduces breaking changes to the API. Version 2.4 of the SDK now ships with
+two versions of the Amazon CloudFront service description, one for the new 2013-05-12 API and one for the next most
+recent 2012-05-05 API. The SDK defaults to using the newest API version, so CloudFront users may experience a breaking
+change to their projects when upgrading. This can be easily circumvented by switching back to the 2012-05-05 API by
+using the `version` option when instantiating the CloudFront client.
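+
+Following the same pattern as the other clients, pinning CloudFront to the 2012-05-05 API would look like the
+following (credential values are placeholders):
+
+```php
+use Aws\CloudFront\CloudFrontClient;
+
+$client = CloudFrontClient::factory(array(
+    'key'     => '',
+    'secret'  => '',
+    'version' => '2012-05-05',
+));
+```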
+
+### Guzzle 3.7
+
+Version 2.4 of the AWS SDK for PHP requires at least version 3.7 of Guzzle.
+
+Upgrade from 2.2 to 2.3
+-----------------------
+
+### Amazon DynamoDB Client
+
+The newly released 2012-08-10 API version of the Amazon DynamoDB service includes the new Local Secondary Indexes
+feature, but also introduces breaking changes to the API. The most notable change is in the way that you specify keys
+when creating tables and retrieving items. Version 2.3 of the SDK now ships with two versions of the DynamoDB service
+description, one for the new 2012-08-10 API and one for the next most recent 2011-12-05 API. The SDK defaults to using
+the newest API version, so DynamoDB users may experience a breaking change to their projects when upgrading. This can be
+easily fixed by switching back to the 2011-12-05 API by using the new `version` configuration setting when instantiating
+the DynamoDB client.
+
+```php
+use Aws\DynamoDb\DynamoDbClient;
+
+$client = DynamoDbClient::factory(array(
+ 'key' => '',
+ 'secret' => '',
+ 'region' => '',
+ 'version' => '2011-12-05'
+));
+```
+
+If you are using a config file with `Aws\Common\Aws`, you can modify your file as follows:
+
+```json
+{
+ "includes": ["_aws"],
+ "services": {
+ "default_settings": {
+ "params": {
+ "key": "",
+ "secret": "",
+ "region": ""
+ }
+ },
+ "dynamodb": {
+ "extends": "dynamodb",
+ "params": {
+ "version": "2011-12-05"
+ }
+ }
+ }
+}
+```
+
+The [SDK user guide](http://docs.aws.amazon.com/aws-sdk-php/guide/latest/index.html) has a guide and examples for both
+versions of the API.
+
+### Guzzle 3.4.1
+
+Version 2.3 of the AWS SDK for PHP requires at least version 3.4.1 of Guzzle.
+
+Upgrade from 2.1 to 2.2
+-----------------------
+
+### Full Service Coverage
+
+The AWS SDK for PHP now supports the full set of AWS services.
+
+### Guzzle 3.3
+
+Version 2.2 of the AWS SDK for PHP requires at least version 3.3 of Guzzle.
+
+Upgrade from 2.0 to 2.1
+-----------------------
+
+### General
+
+Service descriptions are now versioned under the Resources/ directory of each client.
+
+### Waiters
+
+Waiters now require an associative array as input for the underlying operation performed by a waiter. The configuration
+system for waiters under 2.0.x utilized strings to determine the parameters used to create an operation. For example,
+when waiting for an object to exist with Amazon S3, you would pass a string containing the bucket name concatenated
+with the object name using a '/' separator (e.g. 'foo/baz'). In the 2.1 release, these parameters are now more
+explicitly tied to the underlying operation utilized by a waiter. For example, to use the ObjectExists waiter of
+Amazon S3, pass an associative array of `array('Bucket' => 'foo', 'Key' => 'baz')`. These options match the option names
+and rules associated with the HeadObject operation performed by the waiter. The API documentation of each client
+describes the waiters associated with the client and what underlying operation is responsible for waiting on the
+resource. Waiter-specific options, such as the maximum number of attempts (`max_attempts`) or the interval to wait between
+retries (`interval`), can be specified in the same configuration array by prefixing the keys with `waiter.`.
+
+Waiters can also be invoked using magic methods on the client. These magic methods are listed in each client's docblock
+using `@method` tags.
+
+```php
+$s3Client->waitUntilObjectExists(array(
+ 'Bucket' => 'foo',
+ 'Key' => 'bar',
+ 'waiter.max_attempts' => 3
+));
+```
diff --git a/vendor/aws/aws-sdk-php/build.xml b/vendor/aws/aws-sdk-php/build.xml
new file mode 100644
index 0000000..f4c5cde
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/build.xml
@@ -0,0 +1,254 @@
+ php -d mock=true `which phpunit` -c phpunit.functional.xml
+ You must copy phpunit.functional.dist to phpunit.functional.xml and modify the appropriate property settings
diff --git a/vendor/aws/aws-sdk-php/build/aws-autoloader.php b/vendor/aws/aws-sdk-php/build/aws-autoloader.php
new file mode 100644
index 0000000..2b7d7ac
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/build/aws-autoloader.php
@@ -0,0 +1,35 @@
+$classLoader->registerNamespaces(array(
+ 'Aws' => AWS_FILE_PREFIX,
+ 'Guzzle' => AWS_FILE_PREFIX,
+ 'Symfony' => AWS_FILE_PREFIX,
+ 'Doctrine' => AWS_FILE_PREFIX,
+ 'Psr' => AWS_FILE_PREFIX,
+ 'Monolog' => AWS_FILE_PREFIX
+));
+
+$classLoader->register();
+
+return $classLoader;
diff --git a/vendor/aws/aws-sdk-php/build/phar-stub.php b/vendor/aws/aws-sdk-php/build/phar-stub.php
new file mode 100644
index 0000000..5c36dcc
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/build/phar-stub.php
@@ -0,0 +1,36 @@
+ $this->isCli = php_sapi_name() == 'cli';
+ $title = 'AWS SDK for PHP Compatibility Test';
+ if ($this->isCli) {
+ $rep = str_repeat('=', strlen($title));
+ $this->lines[] = "{$rep}\n{$title}\n{$rep}";
+ } else {
+ $this->lines[] = sprintf(
+ '<style>%s%s</style>',
+ 'html {font-family:verdana;} .OK {color: #166116;}',
+ '.FAIL {margin-top: 1em; color: #A52C27;} .WARNING {margin-top: 1em; color:#6B036B;}'
+ );
+ $this->lines[] = "<h1>{$title}</h1>";
+ }
+ $this->write($text);
+ }
+
+ public function addRecommend($info, $func, $text)
+ {
+ $this->check($info, $func, $text, false);
+ }
+
+ public function addRequire($info, $func, $text)
+ {
+ $this->check($info, $func, $text, true);
+ }
+
+ public function iniCheck($info, $setting, $expected, $required = true, $help = null)
+ {
+ $current = ini_get($setting);
+ $cb = function () use ($current, $expected) {
+ return is_callable($expected)
+ ? call_user_func($expected, $current)
+ : $current == $expected;
+ };
+
+ $message = sprintf(
+ '%s in %s is currently set to %s but %s be set to %s.',
+ $setting,
+ php_ini_loaded_file(),
+ var_export($current, true),
+ $required ? 'must' : 'should',
+ var_export($expected, true)
+ ) . ' ' . $help;
+
+ $this->check($info, $cb, trim($message), $required);
+ }
+
+ public function extCheck($ext, $required = true, $help = '')
+ {
+ $info = sprintf('Checking if the %s extension is installed', $ext);
+ $cb = function () use ($ext) { return extension_loaded($ext); };
+ $message = $help ?: sprintf('The %s extension %s be installed', $ext, $required ? 'must' : 'should');
+ $this->check($info, $cb, $message, $required);
+ }
+}
+
+$c = new CompatibilityTest();
+$c->title('System requirements');
+$c->addRequire(
+ 'Ensuring that the version of PHP is >= 5.3.3',
+ function () { return version_compare(phpversion(), '5.3.3', '>='); },
+ 'You must update your version of PHP to 5.3.3 to run the AWS SDK for PHP'
+);
+
+$c->iniCheck('Ensuring that detect_unicode is disabled', 'detect_unicode', false, true, 'Enabling detect_unicode may cause errors when using phar files. See https://bugs.php.net/bug.php?id=42396');
+$c->iniCheck('Ensuring that session.auto_start is disabled', 'session.auto_start', false);
+
+if (extension_loaded('suhosin')) {
+ $c->addRequire(
+ 'Ensuring that phar files can be run with the suhosin patch',
+ function () {
+ return false !== stripos(ini_get('suhosin.executor.include.whitelist'), 'phar');
+ },
+ sprintf('suhosin.executor.include.whitelist must be configured to include "phar" in %s so that the phar file works correctly', php_ini_loaded_file())
+ );
+}
+
+foreach (array('pcre', 'spl', 'json', 'dom', 'simplexml', 'curl') as $ext) {
+ $c->extCheck($ext, true);
+}
+
+if (function_exists('curl_version')) {
+ $c->addRequire('Ensuring that cURL can send https requests', function () {
+ $version = curl_version();
+ return in_array('https', $version['protocols'], true);
+ }, 'cURL must be able to send https requests');
+}
+
+$c->addRequire('Ensuring that file_get_contents works', function () {
+ return function_exists('file_get_contents');
+}, 'file_get_contents has been disabled');
+
+$c->title('System recommendations');
+
+$c->addRecommend(
+ 'Checking if PHP version is >= 5.4.1',
+ function () { return version_compare(phpversion(), '5.4.1', '>='); },
+ 'You are using an older version of PHP (' . phpversion() . '). Consider updating to PHP 5.4.1 or newer to improve the performance and stability of the SDK.'
+);
+
+$c->addRecommend('Checking if you are running on a 64-bit platform', function () {
+ return PHP_INT_MAX === 9223372036854775807;
+}, 'You are not running on a 64-bit installation of PHP. You may run into issues uploading or downloading files larger than 2GB.');
+
+$c->iniCheck('Ensuring that zend.enable_gc is enabled', 'zend.enable_gc', true, false);
+
+$c->check('Ensuring that date.timezone is set', function () {
+ return (bool) ini_get('date.timezone');
+}, 'The date.timezone PHP ini setting has not been set in ' . php_ini_loaded_file(), false);
+
+if (extension_loaded('xdebug')) {
+ $c->addRecommend('Checking if Xdebug is installed', function () { return false; }, 'Xdebug is installed. Consider uninstalling Xdebug to make the SDK run much faster.');
+ $c->iniCheck('Ensuring that Xdebug\'s infinite recursion detection does not erroneously cause a fatal error', 'xdebug.max_nesting_level', 0, false);
+}
+
+$c->extCheck('openssl', false);
+$c->extCheck('zlib', false);
+$c->extCheck('uri_template', false, 'Installing the uri_template extension will make the SDK faster. Install using pecl install uri_template-alpha');
+
+// Is an opcode cache installed or are they running >= PHP 5.5?
+$c->addRecommend(
+ 'Checking if an opcode cache is installed',
+ function () {
+ return version_compare(phpversion(), '5.5.0', '>=') || extension_loaded('apc') || extension_loaded('xcache');
+ },
+    'You are not utilizing an opcode cache. Consider upgrading to PHP >= 5.5 or installing APC.'
+);
+
+$c->title('PHP information');
+ob_start();
+phpinfo(INFO_GENERAL);
+$info = ob_get_clean();
+$c->write($c->quote($info));
+
+$c->endTest();
diff --git a/vendor/aws/aws-sdk-php/composer.json b/vendor/aws/aws-sdk-php/composer.json
new file mode 100755
index 0000000..fd7cabd
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/composer.json
@@ -0,0 +1,47 @@
+{
+ "name": "aws/aws-sdk-php",
+ "homepage": "http://aws.amazon.com/sdkforphp",
+ "description": "AWS SDK for PHP - Use Amazon Web Services in your PHP project",
+ "keywords": ["aws","amazon","sdk","s3","ec2","dynamodb","cloud","glacier"],
+ "type": "library",
+ "license": "Apache-2.0",
+ "authors": [
+ {
+ "name": "Amazon Web Services",
+ "homepage": "http://aws.amazon.com"
+ }
+ ],
+ "support": {
+ "forum": "https://forums.aws.amazon.com/forum.jspa?forumID=80",
+ "issues": "https://github.com/aws/aws-sdk-php/issues"
+ },
+ "require": {
+ "php": ">=5.3.3",
+ "guzzle/guzzle": ">=3.7.0,<=3.9.9"
+ },
+ "suggest": {
+ "doctrine/cache": "Adds support for caching of credentials and responses",
+ "ext-apc": "Allows service description opcode caching, request and response caching, and credentials caching",
+ "ext-openssl": "Allows working with CloudFront private distributions and verifying received SNS messages",
+ "monolog/monolog": "Adds support for logging HTTP requests and responses",
+ "symfony/yaml": "Eases the ability to write manifests for creating jobs in AWS Import/Export"
+ },
+ "require-dev": {
+ "doctrine/cache": "~1.0",
+ "ext-openssl": "*",
+ "monolog/monolog": "1.4.*",
+ "phpunit/phpunit": "3.7.*",
+ "symfony/class-loader": "2.*",
+ "symfony/yaml": "2.*"
+ },
+ "autoload": {
+ "psr-0": {
+ "Aws": "src/"
+ }
+ },
+ "extra": {
+ "branch-alias": {
+ "dev-master": "2.6.x-dev"
+ }
+ }
+}
diff --git a/vendor/aws/aws-sdk-php/docs/Makefile b/vendor/aws/aws-sdk-php/docs/Makefile
new file mode 100644
index 0000000..3c2a1b0
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/Makefile
@@ -0,0 +1,160 @@
+# Makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line.
+SPHINXOPTS =
+SPHINXBUILD = sphinx-build
+PAPER =
+BUILDDIR = _build
+TRACKING =
+
+# Internal variables.
+PAPEROPT_a4 = -D latex_paper_size=a4
+PAPEROPT_letter = -D latex_paper_size=letter
+ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
+# the i18n builder cannot share the environment and doctrees with the others
+I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
+
+.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
+
+help:
+ @echo "Please use \`make <target>' where <target> is one of"
+ @echo " html to make standalone HTML files"
+ @echo " pdf to make PDF files"
+ @echo " dirhtml to make HTML files named index.html in directories"
+ @echo " singlehtml to make a single large HTML file"
+ @echo " pickle to make pickle files"
+ @echo " json to make JSON files"
+ @echo " htmlhelp to make HTML files and a HTML help project"
+ @echo " qthelp to make HTML files and a qthelp project"
+ @echo " devhelp to make HTML files and a Devhelp project"
+ @echo " epub to make an epub"
+ @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
+ @echo " latexpdf to make LaTeX files and run them through pdflatex"
+ @echo " text to make text files"
+ @echo " man to make manual pages"
+ @echo " texinfo to make Texinfo files"
+ @echo " info to make Texinfo files and run them through makeinfo"
+ @echo " gettext to make PO message catalogs"
+ @echo " changes to make an overview of all changed/added/deprecated items"
+ @echo " linkcheck to check all external links for integrity"
+ @echo " doctest to run all doctests embedded in the documentation (if enabled)"
+
+clean:
+ -rm -rf $(BUILDDIR)/*
+
+html:
+ $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
+ @echo
+ @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
+
+pdf:
+ $(SPHINXBUILD) -b pdf $(ALLSPHINXOPTS) $(BUILDDIR)/pdf
+ @echo
+ @echo "Build finished. The PDF file is in $(BUILDDIR)/pdf."
+
+dirhtml:
+ $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
+ @echo
+ @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
+
+singlehtml:
+ $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
+ @echo
+ @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
+
+pickle:
+ $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
+ @echo
+ @echo "Build finished; now you can process the pickle files."
+
+json:
+ $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
+ @echo
+ @echo "Build finished; now you can process the JSON files."
+
+htmlhelp:
+ $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
+ @echo
+ @echo "Build finished; now you can run HTML Help Workshop with the" \
+ ".hhp project file in $(BUILDDIR)/htmlhelp."
+
+qthelp:
+ $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
+ @echo
+ @echo "Build finished; now you can run "qcollectiongenerator" with the" \
+ ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
+ @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/AWSSDKforPHP.qhcp"
+ @echo "To view the help file:"
+ @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/AWSSDKforPHP.qhc"
+
+devhelp:
+ $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
+ @echo
+ @echo "Build finished."
+ @echo "To view the help file:"
+ @echo "# mkdir -p $$HOME/.local/share/devhelp/AWSSDKforPHP"
+ @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/AWSSDKforPHP"
+ @echo "# devhelp"
+
+epub:
+ $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
+ @echo
+ @echo "Build finished. The epub file is in $(BUILDDIR)/epub."
+
+latex:
+ $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
+ @echo
+ @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
+ @echo "Run \`make' in that directory to run these through (pdf)latex" \
+ "(use \`make latexpdf' here to do that automatically)."
+
+latexpdf:
+ $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
+ @echo "Running LaTeX files through pdflatex..."
+ $(MAKE) -C $(BUILDDIR)/latex all-pdf
+ @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
+
+text:
+ $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
+ @echo
+ @echo "Build finished. The text files are in $(BUILDDIR)/text."
+
+man:
+ $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
+ @echo
+ @echo "Build finished. The manual pages are in $(BUILDDIR)/man."
+
+texinfo:
+ $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
+ @echo
+ @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
+ @echo "Run \`make' in that directory to run these through makeinfo" \
+ "(use \`make info' here to do that automatically)."
+
+info:
+ $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
+ @echo "Running Texinfo files through makeinfo..."
+ make -C $(BUILDDIR)/texinfo info
+ @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
+
+gettext:
+ $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
+ @echo
+ @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
+
+changes:
+ $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
+ @echo
+ @echo "The overview file is in $(BUILDDIR)/changes."
+
+linkcheck:
+ $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
+ @echo
+ @echo "Link check complete; look for any errors in the above output " \
+ "or in $(BUILDDIR)/linkcheck/output.txt."
+
+doctest:
+ $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
+ @echo "Testing of doctests in the sources finished, look at the " \
+ "results in $(BUILDDIR)/doctest/output.txt."
diff --git a/vendor/aws/aws-sdk-php/docs/README.md b/vendor/aws/aws-sdk-php/docs/README.md
new file mode 100644
index 0000000..04328ba
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/README.md
@@ -0,0 +1,13 @@
+AWS SDK for PHP
+===============
+
+Documentation for the [AWS SDK for PHP](https://github.com/aws/aws-sdk-php).
+
+Building the documentation
+--------------------------
+
+The documentation is written in [reStructuredText](http://docutils.sourceforge.net/rst.html) and can be built using
+[Sphinx](http://sphinx.pocoo.org/).
+
+1. Install the requirements: ``pip install -r requirements.txt``
+2. Make the HTML documentation: ``make html``
diff --git a/vendor/aws/aws-sdk-php/docs/_ext/aws/__init__.py b/vendor/aws/aws-sdk-php/docs/_ext/aws/__init__.py
new file mode 100644
index 0000000..aa0d934
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/_ext/aws/__init__.py
@@ -0,0 +1,368 @@
+import os, re, subprocess, json, collections
+from sphinx.addnodes import toctree
+from docutils import io, nodes, statemachine, utils
+from docutils.parsers.rst import Directive
+from jinja2 import Environment, PackageLoader
+
+# Maintain a cache of previously loaded examples
+example_cache = {}
+
+# Maintain a cache of previously loaded service descriptions
+description_cache = {}
+
+
+def setup(app):
+ """
+ see: http://sphinx.pocoo.org/ext/appapi.html
+ this is the primary extension point for Sphinx
+ """
+ from sphinx.application import Sphinx
+ if not isinstance(app, Sphinx): return
+
+ app.add_role('regions', regions_role)
+ app.add_directive('service', ServiceIntro)
+ app.add_directive('apiref', ServiceApiRef)
+ app.add_directive('indexlinks', ServiceIndexLinks)
+ app.add_directive('example', ExampleDirective)
+
+
+def regions_role(name, rawtext, text, lineno, inliner, options={}, content={}):
+ """Inserts a list of regions available to a service name
+
+ Returns 2 part tuple containing list of nodes to insert into the
+ document and a list of system messages. Both are allowed to be
+ empty.
+
+ :param name: The role name used in the document.
+ :param rawtext: The entire markup snippet, with role.
+ :param text: The text marked with the role.
+ :param lineno: The line number where rawtext appears in the input.
+ :param inliner: The inliner instance that called us.
+ :param options: Directive options for customization.
+ :param content: The directive content for customization.
+ """
+ try:
+ service_name = str(text)
+ if not service_name:
+ raise ValueError
+ app = inliner.document.settings.env.app
+ node = make_regions_node(rawtext, app, str(service_name), options)
+ return [node], []
+ except ValueError:
+ msg = inliner.reporter.error(
+ 'The service name "%s" is invalid.' % text, line=lineno)
+ prb = inliner.problematic(rawtext, rawtext, msg)
+ return [prb], [msg]
+
+
+def get_regions(service_name):
+ """Get the regions for a service by name
+
+ Returns a list of regions
+
+ :param service_name: Retrieve regions for this service by name
+ """
+ return load_service_description(service_name)['regions'].keys()
+
+
+def make_regions_node(rawtext, app, service_name, options):
+ """Create a list of regions for a service name
+
+ :param rawtext: Text being replaced with the list node.
+ :param app: Sphinx application context
+ :param service_name: Service name
+ :param options: Options dictionary passed to role func.
+ """
+ regions = get_regions(service_name)
+ return nodes.Text(", ".join(regions))
+
+
+class ServiceDescription():
+ """
+ Loads the service description for a given source file
+ """
+
+ def __init__(self, service):
+ self.service_name = service
+ self.description = self.load_description(self.determine_filename())
+
+ def determine_filename(self):
+ """Determines the filename to load for a service"""
+ # Determine the path to the aws-config
+ path = os.path.abspath("../src/Aws/Common/Resources/aws-config.php")
+ self.config = self.__load_php(path)
+
+ # Iterate over the loaded dictionary and see if a matching service exists
+ for key in self.config["services"]:
+ alias = self.config["services"][key].get("alias", "")
+ if key == self.service_name or alias == self.service_name:
+ break
+ else:
+ raise ValueError("No service matches %s" % (self.service_name))
+
+ # Determine the name of the client class to load
+ class_path = self.config["services"][key]["class"].replace("\\", "/")
+ client_path = os.path.abspath("../src/" + class_path + ".php")
+ contents = open(client_path, 'r').read()
+
+ # Determine the current version of the client (look at the LATEST_API_VERSION constant value)
+ version = re.search("LATEST_API_VERSION = '(.+)'", contents).group(1)
+
+ # Determine the name of the service description used by the client
+ matches = re.search("__DIR__ \. '/Resources/(.+)\.php'", contents)
+ description = matches.group(1) % (version)
+
+ # Strip the filename of the client and determine the description path
+ service_path = "/".join(client_path.split(os.sep)[0:-1])
+ service_path += "/Resources/" + description + ".php"
+
+ return service_path
+
+ def load_description(self, path):
+ """Loads the service description from a path
+
+ :param path: Path to a service description to load
+ """
+ return self.__load_php(path)
+
+ def __load_php(self, path):
+ """Load a PHP script that returns an array using JSON
+
+ :param path: Path to the script to load
+ """
+ path = os.path.abspath(path)
+
+ # Build the appropriate command for the platform (Windows vs. Linux/Mac)
+ if os.name == 'nt':
+ sh = 'php -r \"$c = include \'' + path + '\'; echo json_encode($c);\"'
+ else:
+ sh = 'php -r \'$c = include "' + path + '"; echo json_encode($c);\''
+
+ loaded = subprocess.check_output(sh, shell=True)
+ return json.loads(loaded)
+
+ def __getitem__(self, i):
+ """Allows access to the service description items via the class"""
+ return self.description.get(i)
+
+
+def load_service_description(name):
+ if name not in description_cache:
+ description_cache[name] = ServiceDescription(name)
+ return description_cache[name]
+
+
+class ServiceDescriptionDirective(Directive):
+ """
+ Base class for directives that use information from service descriptions
+ """
+
+ required_arguments = 1
+ optional_arguments = 1
+ final_argument_whitespace = True
+
+ def run(self):
+ if len(self.arguments) == 2:
+ api_version = self.arguments[1].strip()
+ else:
+ api_version = ""
+ service_name = self.arguments[0].strip()
+ service_description = load_service_description(service_name)
+
+ rawtext = self.generate_rst(service_description, api_version)
+ tab_width = 4
+ include_lines = statemachine.string2lines(
+ rawtext, tab_width, convert_whitespace=1)
+ self.state_machine.insert_input(
+ include_lines, os.path.abspath(__file__))
+ return []
+
+ def get_service_doc_url(self, namespace):
+ """Determine the documentation link for a service"""
+ namespace = namespace.lower()
+ if namespace == "sts":
+ return "http://aws.amazon.com/documentation/iam/"
+ else:
+ return "http://aws.amazon.com/documentation/" + namespace
+
+ def get_api_ref_url(self, namespace):
+ """Determine the PHP API documentation link for a service"""
+ return "http://docs.aws.amazon.com/aws-sdk-php/latest/class-Aws." + namespace + "." + namespace + "Client.html"
+
+ def get_locator_name(self, name):
+ """Determine the service locator name for an endpoint"""
+ return name
+
+
+class ServiceIntro(ServiceDescriptionDirective):
+ """
+ Creates a service introduction to inject into a document
+ """
+
+ def generate_rst(self, d, api_version):
+ rawtext = ""
+ scalar = {}
+
+ # Grab all of the simple strings from the description
+ for key in d.description:
+ if isinstance(d[key], str) or isinstance(d[key], unicode):
+ scalar[key] = d[key]
+ # Add substitutions for top-level data in a service description
+ rawtext += ".. |%s| replace:: %s\n\n" % (key, scalar[key])
+
+ # Determine the doc URL
+ docs = self.get_service_doc_url(d["namespace"])
+
+ # Determine the "namespace" used for linking to API docs
+ if api_version:
+ apiVersionSuffix = "_" + api_version.replace("-", "_")
+ else:
+ apiVersionSuffix = ""
+
+ env = Environment(loader=PackageLoader('aws', 'templates'))
+ template = env.get_template("client_intro")
+ rawtext += template.render(
+ scalar,
+ regions=get_regions(d["namespace"]),
+ doc_url=docs,
+ specifiedApiVersion=api_version,
+ apiVersionSuffix=apiVersionSuffix)
+
+ return rawtext
+
+
+class ServiceApiRef(ServiceDescriptionDirective):
+ """
+ Creates an API reference section for a service to inject into a document
+ """
+
+ def generate_rst(self, d, api_version):
+ rawtext = ""
+ scalar = {}
+ # Sort the operations by key
+ operations = collections.OrderedDict(sorted(d.description['operations'].items()))
+
+ # Grab all of the simple strings from the description
+ for key in d.description:
+ if isinstance(d[key], str) or isinstance(d[key], unicode):
+ scalar[key] = d[key]
+ # Add substitutions for top-level data in a service description
+ rawtext += ".. |%s| replace:: %s\n\n" % (key, scalar[key])
+
+ # Add magic methods to each operation
+ for key in operations:
+ operations[key]['magicMethod'] = key[0].lower() + key[1:]
+
+ # Set the ordered dict of operations on the description
+ d.description['operations'] = operations
+
+ # Determine the "namespace" used for linking to API docs
+ if api_version:
+ apiVersionSuffix = "_" + api_version.replace("-", "_")
+ else:
+ apiVersionSuffix = ""
+
+ env = Environment(loader=PackageLoader('aws', 'templates'))
+ template = env.get_template("api_reference")
+ rawtext += template.render(
+ scalar,
+ description=d.description,
+ regions=get_regions(d["namespace"]),
+ apiVersionSuffix=apiVersionSuffix)
+
+ return rawtext
+
+
+class ServiceIndexLinks(ServiceDescriptionDirective):
+ """
+ Creates a list of documentation links for a service to inject into a document
+ """
+
+ def generate_rst(self, service_description, api_version):
+ d = service_description.description
+
+ service_name = d["serviceFullName"]
+ if "serviceAbbreviation" in d:
+ service_name = d["serviceAbbreviation"]
+
+ rawtext = "* :doc:`Using the " + service_name + " PHP client `\n"
+ rawtext += "* `PHP API reference <" + self.get_api_ref_url(d["namespace"]) + ">`_\n"
+ #rawtext += "* `General service documentation for " + service_name + " <" + self.get_service_doc_url(d["namespace"]) + ">`_\n"
+
+ return rawtext
+
+
+class ExampleDirective(Directive):
+ """
+ Inserts a formatted PHPUnit example into the source
+ """
+
+ # Directive configuration
+ required_arguments = 2
+ optional_arguments = 0
+ final_argument_whitespace = True
+
+ def run(self):
+ self.end_function = " }\n"
+ self.begin_tag = " // @begin\n"
+ self.end_tag = " // @end\n"
+
+ example_file = self.arguments[0].strip()
+ example_name = self.arguments[1].strip()
+
+ if not example_name:
+ raise ValueError("Must specify both an example file and example name")
+
+ contents = self.load_example(example_file, example_name)
+ rawtext = self.generate_rst(contents)
+ tab_width = 4
+ include_lines = statemachine.string2lines(
+ rawtext, tab_width, convert_whitespace=1)
+ self.state_machine.insert_input(
+ include_lines, os.path.abspath(__file__))
+ return []
+
+ def load_example(self, example_file, example_name):
+ """Loads the contents of an example and strips out non-example parts"""
+ key = example_file + '.' + example_name
+
+ # Check if this example is cached already
+ if key in example_cache:
+ return example_cache[key]
+
+ # Not cached, so index the example file functions
+ path = os.path.abspath(__file__ + "/../../../../tests/Aws/Tests/" + example_file)
+
+ f = open(path, 'r')
+ in_example = False
+ capturing = False
+ buffer = ""
+
+ # Scan each line of the file and create example hashes
+ for line in f:
+ if in_example:
+ if line == self.end_function:
+ if in_example:
+ example_cache[in_example] = buffer
+ buffer = ""
+ in_example = False
+ elif line == self.begin_tag:
+ # Look for the opening // @begin tag to begin capturing
+ buffer = ""
+ capturing = True
+ elif line == self.end_tag:
+ # Look for the optional closing tag to stop capturing
+ capturing = False
+ elif capturing:
+ buffer += line
+ elif "public function test" in line:
+ # Grab the function name from the line and keep track of the
+ # name of the current example being captured
+ current_name = re.search('function (.+)\s*\(', line).group(1)
+ in_example = example_file + "." + current_name
+ f.close()
+ return example_cache[key]
+
+ def generate_rst(self, contents):
+ rawtext = ".. code-block:: php\n\n" + contents
+ return rawtext
diff --git a/vendor/aws/aws-sdk-php/docs/_ext/aws/templates/api_reference b/vendor/aws/aws-sdk-php/docs/_ext/aws/templates/api_reference
new file mode 100644
index 0000000..069a61c
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/_ext/aws/templates/api_reference
@@ -0,0 +1,28 @@
+
+API Reference
+-------------
+
+Please see the `{{ serviceFullName }} Client API reference `_
+for details about all of the available methods, including descriptions of the inputs and outputs.
+
+{# Here we are creating a list-table. The contents of a list-table looks like:
+ * - Foo
+ - Bar
+ * - Baz
+ - Bam
+
+ We must also ensure that the same number of columns are available for each table row.
+#}
+
+.. list-table::
+ :header-rows: 0
+ :stub-columns: 0
+ :class: two-column
+
+ {% for key, op in description.operations.iteritems() %}
+ {% if loop.index is odd %}* {% else %} {% endif %}- `{{ key }} `_
+ {%- if op.documentationUrl %} (`service docs <{{ op.documentationUrl}}>`_){%- endif %}
+ {%- if loop.last and loop.index is odd %}
+ -
+ {%- endif %}
+ {% endfor %}
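The template's odd/even `loop.index` dance exists because a reST list-table requires every row to have the same number of columns, so an odd number of operations needs a trailing empty cell. The same pairing logic, sketched in Python (the function name is illustrative):

```python
def two_column_list_table(items):
    """Render item names as the body of a two-column reST list-table,
    pairing consecutive items and padding the last row if needed."""
    items = list(items)
    if len(items) % 2:
        items.append("")  # pad so every row has exactly two cells
    lines = []
    for left, right in zip(items[::2], items[1::2]):
        lines.append("* - " + left)
        lines.append("  - " + right)
    return "\n".join(lines)
```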
diff --git a/vendor/aws/aws-sdk-php/docs/_ext/aws/templates/client_intro b/vendor/aws/aws-sdk-php/docs/_ext/aws/templates/client_intro
new file mode 100644
index 0000000..92c93a4
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/_ext/aws/templates/client_intro
@@ -0,0 +1,72 @@
+====================================================================================
+{{serviceFullName}}{% if specifiedApiVersion %} ({{specifiedApiVersion}}){% endif %}
+====================================================================================
+
+This guide focuses on the AWS SDK for PHP client for `{{ serviceFullName }} <{{ doc_url }}>`_. This guide assumes that
+you have already downloaded and installed the AWS SDK for PHP. See :doc:`installation` for more information on
+getting started.
+
+{% if specifiedApiVersion %}
+**Note:** This guide is for the **{{ specifiedApiVersion }}** API version of {{ serviceFullName }}. You may also be
+interested in the :doc:`guide for the latest API version of {{ serviceFullName }} `.
+{% endif %}
+
+Creating a client
+-----------------
+
+First you need to create a client object using one of the following techniques.
+
+Factory method
+~~~~~~~~~~~~~~
+
+The easiest way to get up and running quickly is to use the ``Aws\{{namespace}}\{{namespace}}Client::factory()`` method
+and provide your credential profile (via the ``profile`` option), which identifies the set of credentials you want to
+use from your ``~/.aws/credentials`` file (see :ref:`credential_profiles`).
+
+{% if not globalEndpoint -%}
+A ``region`` parameter is also required and must be set to one of the following values: ``{{ regions|join("``, ``") }}``
+{% endif %}
+
+.. code-block:: php
+
+ use Aws\{{namespace}}\{{namespace}}Client;
+
+ $client = {{namespace}}Client::factory(array(
+ 'profile' => ''{% if not globalEndpoint -%},
+ 'region' => ''{% endif %}{% if specifiedApiVersion -%},
+ 'version' => '{{specifiedApiVersion}}'{% endif %}
+ ));
+
+You can provide your credential profile like in the preceding example, specify your access keys directly (via ``key``
+and ``secret``), or you can choose to omit any credential information if you are using `AWS Identity and Access
+Management (IAM) roles for EC2 instances `_
+or credentials sourced from the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` environment variables.
+
+.. note::
+
+ The ``profile`` option and AWS credential file support is only available for version 2.6.1 of the SDK and higher.
+ We recommend that all users update their copies of the SDK to take advantage of this feature, which is a safer way
+ to specify credentials than explicitly providing ``key`` and ``secret``.
+
+Service builder
+~~~~~~~~~~~~~~~
+
+A more robust way to connect to {{ serviceFullName }} is through the service builder. This allows you to specify
+credentials and other configuration settings in a configuration file. These settings can then be shared across all
+clients so that you only have to specify your settings once.
+
+.. code-block:: php
+
+ use Aws\Common\Aws;
+
+ // Create a service builder using a configuration file
+ $aws = Aws::factory('/path/to/my_config.json');
+
+ // Get the client from the builder by namespace
+ {% if specifiedApiVersion -%}
+ $client = $aws->get('{{ namespace|lower() }}_{{ apiVersionSuffix|replace("_", "") }}');
+ {% else -%}
+ $client = $aws->get('{{ namespace }}');
+ {% endif %}
+
+.. _{{ namespace }}{{ apiVersionSuffix }}_operations:
diff --git a/vendor/aws/aws-sdk-php/docs/_snippets/incomplete.txt b/vendor/aws/aws-sdk-php/docs/_snippets/incomplete.txt
new file mode 100644
index 0000000..c655a2d
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/_snippets/incomplete.txt
@@ -0,0 +1,10 @@
+------------------------------
+
+.. admonition:: This guide is incomplete
+
+ This guide is not quite finished. If you are looking for a good way to contribute to the SDK and to the rest of
+ the AWS PHP community, then helping to write documentation is a great place to start. Our guides are written
+ in `reStructuredText <http://docutils.sourceforge.net/rst.html>`_ and generated using
+ `Sphinx <http://sphinx.pocoo.org/>`_. Feel free to add some content to our documentation and send a pull request
+ to https://github.com/aws/aws-sdk-php. You can view our documentation sources at
+ https://github.com/aws/aws-sdk-php/tree/master/docs.
diff --git a/vendor/aws/aws-sdk-php/docs/_snippets/iterators-intro.txt b/vendor/aws/aws-sdk-php/docs/_snippets/iterators-intro.txt
new file mode 100644
index 0000000..9f16e63
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/_snippets/iterators-intro.txt
@@ -0,0 +1,21 @@
+Some AWS operations return truncated results that require subsequent requests in order to retrieve the entire result
+set. The subsequent requests typically require pagination tokens or markers from the previous request in order to
+retrieve the next set of results. Working with these tokens can be cumbersome, since you must manually keep track of
+them, and the API for each service you are using may differ in the names and details of the tokens.
+
+The AWS SDK for PHP has a feature called **Iterators** that allows you to retrieve an *entire* result set without
+manually handling pagination tokens or markers. The Iterators in the SDK implement PHP's ``Iterator`` interface, which
+allows you to easily enumerate or iterate through resources from a result set with ``foreach``.
+
+Operations that start with ``List`` or ``Describe``, or any other operations that are designed to return multiple
+records, can be used with Iterators. To use an Iterator, you must call the ``getIterator()`` method of the client and
+provide the operation name. The following is an example of creating an Amazon S3 ``ListObjects`` Iterator, to iterate
+over objects in a bucket.
+
+.. code-block:: php
+
+ $iterator = $client->getIterator('ListObjects', array('Bucket' => 'my-bucket'));
+
+ foreach ($iterator as $object) {
+ echo $object['Key'] . "\n";
+ }
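Under the hood, an iterator like this keeps re-issuing the request with the previous response's pagination token until the service stops returning one. A language-agnostic sketch of that loop in Python (``fetch_page`` here is a stand-in callable, not part of the SDK's API):

```python
def iterate_all(fetch_page):
    """Yield every record from a token-paginated API.

    fetch_page takes a pagination token (None for the first page) and
    returns (records, next_token); next_token is None on the last page.
    """
    token = None
    while True:
        records, token = fetch_page(token)
        for record in records:
            yield record
        if token is None:
            break
```

This is exactly the bookkeeping the SDK's Iterators hide: the caller only ever sees a flat stream of records.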
diff --git a/vendor/aws/aws-sdk-php/docs/_snippets/models-intro.txt b/vendor/aws/aws-sdk-php/docs/_snippets/models-intro.txt
new file mode 100644
index 0000000..68db25b
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/_snippets/models-intro.txt
@@ -0,0 +1,32 @@
+The result of performing an operation is what we refer to as a **modeled response**. Instead of returning the raw XML
+or JSON data, the SDK will coerce the data into an associative array and normalize some aspects of the data based on its
+knowledge of the specific service and the underlying response structure.
+
+The actual value returned is a `Model `_
+(``Guzzle\Service\Resource\Model``) object. The Model class is a part of the SDK's underlying Guzzle library, but you do
+not need to know anything about Guzzle to use your operation results. The Model object contains the data from the
+response and can be used like an array (e.g., ``$result['Table']``). It also has convenience methods like ``get()``,
+``getPath()``, and ``toArray()``. The contents of the modeled response depend on the operation that was executed and are
+documented in the API docs for each operation (e.g., see the *Returns* section in the API docs for the `DynamoDB
+DescribeTable operation `_).
+
+.. code-block:: php
+
+ $result = $dynamoDbClient->describeTable(array(
+ 'TableName' => 'YourTableName',
+ ));
+
+ // Get a specific value from the result
+ $table = $result['Table'];
+ if ($table && isset($table['TableStatus'])) {
+ echo $table['TableStatus'];
+ }
+ //> ACTIVE
+
+ // Get nested values from the result easily
+ echo $result->getPath('Table/TableStatus');
+ //> ACTIVE
+
+ // Convert the Model to a plain array
+ var_export($result->toArray());
+ //> array ( 'Table' => array ( 'AttributeDefinitions' => array ( ... ) ... ) ... )
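``getPath()`` is essentially a safe walk over nested keys using a slash-delimited path. A minimal Python equivalent of the idea (this sketches the concept only, not Guzzle's actual implementation):

```python
def get_path(data, path, default=None):
    """Look up a nested value by a slash-delimited path, e.g.
    get_path(result, 'Table/TableStatus'), returning default if any
    segment of the path is missing."""
    node = data
    for key in path.split("/"):
        if isinstance(node, dict) and key in node:
            node = node[key]
        else:
            return default
    return node
```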
diff --git a/vendor/aws/aws-sdk-php/docs/_snippets/performing-operations.txt b/vendor/aws/aws-sdk-php/docs/_snippets/performing-operations.txt
new file mode 100644
index 0000000..dfbd064
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/_snippets/performing-operations.txt
@@ -0,0 +1,14 @@
+You can perform a service **operation** by calling the method of the same name on the client object. For example, to
+perform the `Amazon DynamoDB DescribeTable operation
+`_, you must call the
+``Aws\DynamoDb\DynamoDbClient::describeTable()`` method. Operation methods, like ``describeTable()``, all accept a
+single argument that is an associative array of values representing the parameters to the operation. The structure of
+this array is defined for each operation in the SDK's `API Documentation `_
+(e.g., see the `API docs for describeTable()
+`_).
+
+.. code-block:: php
+
+ $result = $dynamoDbClient->describeTable(array(
+ 'TableName' => 'YourTableName',
+ ));
diff --git a/vendor/aws/aws-sdk-php/docs/_snippets/waiters-intro.txt b/vendor/aws/aws-sdk-php/docs/_snippets/waiters-intro.txt
new file mode 100644
index 0000000..c0cbe35
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/_snippets/waiters-intro.txt
@@ -0,0 +1,15 @@
+One of the higher-level abstractions provided by the SDK is **Waiters**. Waiters help make it easier to work with
+*eventually consistent* systems by providing an easy way to wait until a resource enters a particular state by
+polling the resource. You can find a list of the waiters supported by a client by viewing the API Documentation of a
+service client. Any method with a name starting with "``waitUntil``" will create and invoke a Waiter.
+
+In the following example, the Amazon S3 Client is used to create a bucket. Then the Waiter is used to wait until the
+bucket exists.
+
+.. code-block:: php
+
+ // Create a bucket
+ $s3Client->createBucket(array('Bucket' => 'my-bucket'));
+
+ // Wait until the created bucket is available
+ $s3Client->waitUntil('BucketExists', array('Bucket' => 'my-bucket'));
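A waiter is conceptually just a bounded polling loop: check the resource, sleep, repeat, and give up after a maximum number of attempts. A Python sketch of that pattern (the parameters here are illustrative; the SDK's real waiters carry per-waiter configuration for intervals and max attempts):

```python
import time

def wait_until(check, max_attempts=20, delay=1.0):
    """Poll check() until it returns True, sleeping delay seconds
    between attempts; raise if the state is never reached."""
    for _ in range(max_attempts):
        if check():
            return
        time.sleep(delay)
    raise RuntimeError("resource did not reach the desired state")
```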
diff --git a/vendor/aws/aws-sdk-php/docs/_static/logo.png b/vendor/aws/aws-sdk-php/docs/_static/logo.png
new file mode 100644
index 0000000..684f30f
Binary files /dev/null and b/vendor/aws/aws-sdk-php/docs/_static/logo.png differ
diff --git a/vendor/aws/aws-sdk-php/docs/_templates/feedback.html b/vendor/aws/aws-sdk-php/docs/_templates/feedback.html
new file mode 100644
index 0000000..34bf965
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/_templates/feedback.html
@@ -0,0 +1,16 @@
+<h3>Feedback</h3>
+<p>
+ Did you find this page useful? Do you have a suggestion? Give us feedback or
+ send us a pull request on GitHub.
+</p>
diff --git a/vendor/aws/aws-sdk-php/docs/awssignup.rst b/vendor/aws/aws-sdk-php/docs/awssignup.rst
new file mode 100644
index 0000000..29d4423
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/awssignup.rst
@@ -0,0 +1,62 @@
+==================
+Signing Up for AWS
+==================
+
+.. important:: This page is obsolete. Please see `About Access Keys `_.
+
+Creating an AWS account
+-----------------------
+
+Before you begin, you need to create an account. When you sign up for AWS, AWS signs your account up for all services.
+You are charged only for the services you use.
+
+To sign up for AWS
+~~~~~~~~~~~~~~~~~~
+
+#. Go to http://aws.amazon.com and click **Sign Up Now**.
+
+#. Follow the on-screen instructions.
+
+AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account
+activity and manage your account at http://aws.amazon.com/account. From the **My Account** page, you can view current
+charges and account activity and download usage reports.
+
+To view your AWS credentials
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. Go to http://aws.amazon.com/.
+
+#. Click **My Account/Console**, and then click **Security Credentials**.
+
+#. Under **Your Account**, click **Security Credentials**.
+
+#. In the spaces provided, type your user name and password, and then click **Sign in using our secure server**.
+
+#. Under **Access Credentials**, on the **Access Keys** tab, your access key ID is displayed. To view your secret key,
+ under **Secret Access Key**, click **Show**.
+
+Your secret key must remain a secret that is known only by you and AWS. Keep it confidential in order to protect your
+account. Store it securely in a safe place, and never email it. Do not share it outside your organization, even if an
+inquiry appears to come from AWS or Amazon.com. No one who legitimately represents Amazon will ever ask you for your
+secret key.
+
+Getting your AWS credentials
+----------------------------
+
+In order to use the AWS SDK for PHP, you need your AWS Access Key ID and Secret Access Key.
+
+To get your AWS Access Key ID and Secret Access Key
+
+- Go to http://aws.amazon.com/.
+- Click **Account** and then click **Security Credentials**. The Security Credentials page displays (you might be
+ prompted to log in).
+- Scroll down to Access Credentials and make sure the **Access Keys** tab is selected. The AWS Access Key ID appears in
+ the Access Key column.
+- To view the Secret Access Key, click **Show**.
+
+.. note::
+
+ **Your Secret Access Key is a secret**, which only you and AWS should know. Keep it confidential
+ to protect your account. Store it securely in a safe place. Never include it in your requests to AWS, and never
+ e-mail it to anyone. Do not share it outside your organization, even if an inquiry appears to come from AWS or
+ Amazon.com. No one who legitimately represents Amazon will ever ask you for your Secret Access Key.
diff --git a/vendor/aws/aws-sdk-php/docs/conf.py b/vendor/aws/aws-sdk-php/docs/conf.py
new file mode 100644
index 0000000..73f5c66
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/conf.py
@@ -0,0 +1,269 @@
+# -*- coding: utf-8 -*-
+#
+# AWS SDK for PHP documentation build configuration file, created by
+# sphinx-quickstart on Mon Dec 10 19:00:11 2012.
+#
+# This file is execfile()d with the current directory set to its containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+import sys, os, subprocess
+
+# Don't require opening PHP tags in PHP examples
+from sphinx.highlighting import lexers
+from pygments.lexers.web import PhpLexer
+lexers['php'] = PhpLexer(startinline=True, linenos=1)
+lexers['php-annotations'] = PhpLexer(startinline=True, linenos=1)
+primary_domain = 'php'
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#sys.path.insert(0, os.path.abspath('.'))
+
+# -- General configuration -----------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#needs_sphinx = '1.0'
+
+# Add our custom extensions
+sys.path.append(os.path.abspath('_ext/'))
+
+# Add any Sphinx extension module names here, as strings. They can be extensions
+# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
+extensions = ['aws', 'rst2pdf.pdfbuilder']
+
+# index, rst2pdf, title, author
+pdf_documents = [('index', u'aws-sdk-php-guide', u'AWS SDK for PHP', u'Amazon Web Services')]
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix of source filenames.
+source_suffix = '.rst'
+
+# The encoding of source files.
+#source_encoding = 'utf-8-sig'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = u'AWS SDK for PHP'
+copyright = u'2013, Amazon Web Services'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+version = subprocess.check_output('git describe --abbrev=0 --tags', shell=True).strip()
+# The full version, including alpha/beta/rc tags.
+release = version
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#language = None
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#today = ''
+# Else, today_fmt is used as the format for a strftime call.
+#today_fmt = '%B %d, %Y'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+exclude_patterns = ['_build']
+
+# The reST default role (used for this markup: `text`) to use for all documents.
+#default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#show_authors = False
+
+# A list of ignored prefixes for module index sorting.
+#modindex_common_prefix = []
+
+# -- Options for HTML output ---------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+#html_theme_options = {}
+
+# Add any paths that contain custom themes here, relative to this directory.
+#html_theme_path = []
+
+# The name for this set of Sphinx documents. If None, it defaults to
+# " v documentation".
+#html_title = None
+
+# A shorter title for the navigation bar. Default is the same as html_title.
+#html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+#html_logo = None
+
+# The name of an image file (within the static path) to use as favicon of the
+# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+#html_favicon = None
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
+# using the given strftime format.
+#html_last_updated_fmt = '%b %d, %Y'
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+#html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+html_sidebars = {
+ '**': ['sidebarlogo.html', 'localtoc.html', 'searchbox.html', 'feedback.html']
+}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#html_additional_pages = {}
+
+# If false, no module index is generated.
+#html_domain_indices = True
+
+# If false, no index is generated.
+#html_use_index = True
+
+# If true, the index is split into individual pages for each letter.
+#html_split_index = False
+
+# If true, links to the reST sources are added to the pages.
+html_show_sourcelink = False
+
+# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
+#html_show_sphinx = True
+
+# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
+#html_show_copyright = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it. The value of this option must be the
+# base URL from which the finished HTML is served.
+#html_use_opensearch = ''
+
+# This is the file name suffix for HTML files (e.g. ".xhtml").
+#html_file_suffix = None
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'AWSSDKforPHPdoc'
+
+
+# -- Options for LaTeX output --------------------------------------------------
+
+latex_elements = {
+# The paper size ('letterpaper' or 'a4paper').
+#'papersize': 'letterpaper',
+
+# The font size ('10pt', '11pt' or '12pt').
+#'pointsize': '10pt',
+
+# Additional stuff for the LaTeX preamble.
+#'preamble': '',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title, author, documentclass [howto/manual]).
+latex_documents = [
+ ('index', 'AWSSDKforPHP.tex', u'AWS SDK for PHP Documentation',
+ u'Amazon Web Services', 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+#latex_logo = None
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#latex_use_parts = False
+
+# If true, show page references after internal links.
+#latex_show_pagerefs = False
+
+# If true, show URL addresses after external links.
+#latex_show_urls = False
+
+# Documents to append as an appendix to all manuals.
+#latex_appendices = []
+
+# If false, no module index is generated.
+#latex_domain_indices = True
+
+
+# -- Options for manual page output --------------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+ ('index', 'awssdkforphp', u'AWS SDK for PHP Documentation',
+ [u'Amazon Web Services'], 1)
+]
+
+# If true, show URL addresses after external links.
+#man_show_urls = False
+
+
+# -- Options for Texinfo output ------------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+texinfo_documents = [
+ ('index', 'AWSSDKforPHP', u'AWS SDK for PHP Documentation',
+ u'Amazon Web Services', 'AWSSDKforPHP', 'One line description of project.',
+ 'Miscellaneous'),
+]
+
+# Documents to append as an appendix to all manuals.
+#texinfo_appendices = []
+
+# If false, no module index is generated.
+#texinfo_domain_indices = True
+
+# How to display URL addresses: 'footnote', 'no', or 'inline'.
+#texinfo_show_urls = 'footnote'
+
+# -- HTML theme settings ------------------------------------------------
+import guzzle_sphinx_theme
+extensions.append("guzzle_sphinx_theme")
+html_translator_class = 'guzzle_sphinx_theme.HTMLTranslator'
+html_theme_path = guzzle_sphinx_theme.html_theme_path()
+html_theme = 'guzzle_sphinx_theme'
+
+# Guzzle theme options (see theme.conf for more information)
+html_theme_options = {
+ # hack to add tracking
+ "google_analytics_account": os.getenv('TRACKING', False),
+ "project_nav_name": "AWS SDK for PHP",
+ "github_user": "aws",
+ "github_repo": "aws-sdk-php",
+ "base_url": "http://docs.aws.amazon.com/aws-sdk-php/guide/latest/"
+}
diff --git a/vendor/aws/aws-sdk-php/docs/configuration.rst b/vendor/aws/aws-sdk-php/docs/configuration.rst
new file mode 100644
index 0000000..822ae8e
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/configuration.rst
@@ -0,0 +1,316 @@
+Configuring the SDK
+===================
+
+The AWS SDK for PHP can be configured in many ways to suit your needs. This guide highlights the use of configuration
+files with the service builder as well as individual client configuration options.
+
+Configuration files
+-------------------
+
+How configuration files work
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When passing an array of parameters to the first argument of ``Aws\Common\Aws::factory()``, the service builder loads
+the default ``aws-config.php`` file and merges the array of shared parameters into the default configuration.
+
+Excerpt from ``src/Aws/Common/Resources/aws-config.php``:
+
+.. code-block:: php
+
+    <?php return array(
+        'class'    => 'Aws\Common\Aws',
+        'services' => array(
+            'default_settings' => array(
+                'params' => array()
+            ),
+            'autoscaling' => array(
+                'alias'   => 'AutoScaling',
+                'extends' => 'default_settings',
+                'class'   => 'Aws\AutoScaling\AutoScalingClient'
+            ),
+            'cloudformation' => array(
+                'alias'   => 'CloudFormation',
+                'extends' => 'default_settings',
+                'class'   => 'Aws\CloudFormation\CloudFormationClient'
+            ),
+            // ...
+        )
+    );
+
+The ``aws-config.php`` file provides default configuration settings for associating client classes with service names.
+This file tells the ``Aws\Common\Aws`` service builder which class to instantiate when you reference a client by name.
+
+You can supply your credential profile (see :ref:`credential_profiles`) and other configuration settings to the service
+builder so that each client is instantiated with those settings. To do this, pass an array of settings (including your
+``profile``) into the first argument of ``Aws\Common\Aws::factory()``.
+
+.. code-block:: php
+
+    <?php
+
+    use Aws\Common\Aws;
+
+    $aws = Aws::factory(array(
+        'profile' => 'my_profile',
+        'region'  => 'us-east-1',
+    ));
+
+Using a custom configuration file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can use a custom configuration file that allows you to create custom named clients with pre-configured settings.
+
+Let's say you want to use the default ``aws-config.php`` settings, but you want to supply your keys using a
+configuration file. Each service defined in the default configuration file extends from ``default_settings`` service.
+You can create a custom configuration file that extends the default configuration file and add credentials to the
+``default_settings`` service:
+
+.. code-block:: php
+
+    <?php return array(
+        'includes' => array('_aws'),
+        'services' => array(
+            'default_settings' => array(
+                'params' => array(
+                    'profile' => 'my_profile', // Looks up credentials in ~/.aws/credentials
+                    'region'  => 'us-west-2'
+                )
+            )
+        )
+    );
+
+Make sure to include the ``'includes' => array('_aws'),`` line in your configuration file, because this extends the
+default configuration that makes all of the service clients available to the service builder. If this is missing, then
+you will get an exception when trying to retrieve a service client.
+
+You can use your custom configuration file with the ``Aws\Common\Aws`` class by passing the full path to the
+configuration file in the first argument of the ``factory()`` method:
+
+.. code-block:: php
+
+    <?php
+
+    require 'vendor/autoload.php';
+
+    use Aws\Common\Aws;
+
+    $aws = Aws::factory('/path/to/custom/config.php');
+
+You can also use a custom configuration file to define multiple named clients of the same service, each with its own
+pre-configured settings. For example, you could define two Amazon DynamoDB clients, each using a different credential
+profile:
+
+.. code-block:: php
+
+    <?php return array(
+        'includes' => array('_aws'),
+        'services' => array(
+            'foo.dynamodb' => array(
+                'extends' => 'dynamodb',
+                'params'  => array(
+                    'profile' => 'my_profile',
+                    'region'  => 'us-west-2'
+                )
+            ),
+            'bar.dynamodb' => array(
+                'extends' => 'dynamodb',
+                'params'  => array(
+                    'profile' => 'my_other_profile',
+                    'region'  => 'us-west-2'
+                )
+            )
+        )
+    );
+
+If you prefer JSON syntax, you can define your configuration in JSON format instead of PHP.
+
+.. code-block:: js
+
+ {
+ "includes": ["_aws"],
+ "services": {
+ "default_settings": {
+ "params": {
+ "profile": "my_profile",
+ "region": "us-west-2"
+ }
+ }
+ }
+ }
+
+For more information about writing custom configuration files, please see the "Using the Service Builder" chapter of
+the Guzzle documentation.
+
+Client configuration options
+-----------------------------
+
+Basic client configuration options include your credentials ``profile`` (see :doc:`credentials`) and a ``region``
+(see :ref:`specify_region`). For typical use cases, you will not need to provide more than these options.
+The following tables list all of the possible configuration options for service clients in the SDK.
+
+========================= ==============================================================================================
+Credentials Options
+------------------------------------------------------------------------------------------------------------------------
+Options Description
+========================= ==============================================================================================
+``profile`` The AWS credential profile associated with the credentials you want to use. The profile is
+ used to look up your credentials in your credentials file (``~/.aws/credentials``). See
+ :ref:`credential_profiles` for more information.
+
+``key`` An AWS access key ID. Unless you are setting temporary credentials provided by AWS STS, it is
+ recommended that you avoid hard-coding credentials with this parameter. Please see
+ :doc:`credentials` for my information about credentials.
+
+``secret`` An AWS secret access key. Unless you are setting temporary credentials provided by AWS STS, it
+ is recommended that you avoid hard-coding credentials with this parameter. Please see
+ :doc:`credentials` for my information about credentials.
+
+``token`` An AWS security token to use with request authentication. These are typically provided by the
+ AWS STS service. Please note that not all services accept temporary credentials.
+ See http://docs.aws.amazon.com/STS/latest/UsingSTS/UsingTokens.html.
+
+``token.ttd`` The UNIX timestamp for when the provided credentials expire.
+
+``credentials`` A credentials object (``Aws\Common\Credentials\CredentialsInterface``) can be provided instead
+ explicit access keys and tokens.
+
+``credentials.cache.key`` Optional custom cache key to use with the credentials.
+
+``credentials.client`` Pass this option to specify a custom ``Guzzle\Http\ClientInterface`` to use if your
+ credentials require a HTTP request (e.g. ``RefreshableInstanceProfileCredentials``).
+========================= ==============================================================================================
+
+========================= ==============================================================================================
+Endpoint and Signature Options
+------------------------------------------------------------------------------------------------------------------------
+Options Description
+========================= ==============================================================================================
+``region`` Region name (e.g., 'us-east-1', 'us-west-1', 'us-west-2', 'eu-west-1', etc.).
+ See :ref:`specify_region`.
+
+``scheme`` URI Scheme of the base URL (e.g.. 'https', 'http') used when base_url is not supplied.
+
+``base_url`` Allows you to specify a custom endpoint instead of have the SDK build one automatically from
+ the region and scheme.
+
+``signature`` Overrides the signature used by the client. Clients will always choose an appropriate default
+ signature. However, it can be useful to override this with a custom setting. This can be set
+ to "v4", "v3https", "v2" or an instance of ``Aws\Common\Signature\SignatureInterface``.
+
+``signature.service`` The signature service scope for Signature V4. See :ref:`custom_endpoint`.
+
+``signature.region`` The signature region scope for Signature V4. See :ref:`custom_endpoint`.
+========================= ==============================================================================================
+
+================================== =====================================================================================
+Generic Client Options
+------------------------------------------------------------------------------------------------------------------------
+Options Description
+================================== =====================================================================================
+``ssl.certificate_authority`` Set to true to use the SDK bundled SSL certificate bundle (this is used by default),
+ ``'system'`` to use the bundle on your system, a string pointing to a file to use a
+ specific certificate file, a string pointing to a directory to use multiple
+ certificates, or false to disable SSL validation (not recommended).
+
+ When using the ``aws.phar``, the bundled SSL certificate will be extracted to your
+ system's temp folder, and each time a client is created an MD5 check will be
+ performed to ensure the integrity of the certificate.
+
+``curl.options`` Associative array of cURL options to apply to every request created by the client.
+ If either the key or value of an entry in the array is a string, Guzzle will attempt
+ to find a matching defined cURL constant automatically (e.g. ``"CURLOPT_PROXY"`` will
+ be converted to the constant ``CURLOPT_PROXY``).
+
+``request.options`` Associative array of `Guzzle request options
+ `_ to
+ apply to every request created by the client.
+
+``command.params`` An associative array of default options to set on each command created by the client.
+
+``client.backoff.logger`` A ``Guzzle\Log\LogAdapterInterface`` object used to log backoff retries. Use
+ ``'debug'`` to emit PHP warnings when a retry is issued.
+
+``client.backoff.logger.template`` Optional template to use for exponential backoff log messages. See the
+ ``Guzzle\Plugin\Backoff\BackoffLogger`` class for formatting information.
+================================== =====================================================================================
+
+.. _specify_region:
+
+Specifying a region
+~~~~~~~~~~~~~~~~~~~
+
+Some clients require a ``region`` configuration setting. You can find out if the client you are using requires a region
+and the regions available to a client by consulting the documentation for that particular client
+(see :ref:`supported-services`).
+
+Here's an example of creating an Amazon DynamoDB client that uses the ``us-west-1`` region:
+
+.. code-block:: php
+
+ require 'vendor/autoload.php';
+
+ use Aws\DynamoDb\DynamoDbClient;
+
+ // Create a client that uses the us-west-1 region
+ $client = DynamoDbClient::factory(array(
+ 'key' => 'YOUR_AWS_ACCESS_KEY_ID',
+ 'secret' => 'YOUR_AWS_SECRET_ACCESS_KEY',
+ 'region' => 'us-west-1'
+ ));
+
+.. _custom_endpoint:
+
+Setting a custom endpoint
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can specify a completely customized endpoint for a client using the client's ``base_url`` option. If the client you
+are using requires a region, then you must still specify the name of the region using the ``region`` option. Setting a
+custom endpoint can be useful if you're using a mock web server that emulates a web service, you're testing against a
+private beta endpoint, or you are trying to use a new region not yet supported by the SDK.
+
+Here's an example of creating an Amazon DynamoDB client that uses a completely customized endpoint:
+
+.. code-block:: php
+
+ require 'vendor/autoload.php';
+
+ use Aws\DynamoDb\DynamoDbClient;
+
+ // Create a client that contacts a completely customized base URL
+ $client = DynamoDbClient::factory(array(
+ 'base_url' => 'http://my-custom-url',
+ 'region' => 'my-region-1',
+ 'key' => 'abc',
+ 'secret' => '123'
+ ));
+
+If your custom endpoint uses signature version 4 and must be signed with custom signature scoping values, then you can
+specify the signature scoping values using ``signature.service`` (the scoped name of the service) and
+``signature.region`` (the region that you are contacting). These values are typically not required.
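+
+As a sketch (the endpoint and scoping values below are placeholders), the scoping options are passed alongside
+``base_url``:
+
+.. code-block:: php
+
+    <?php
+
+    use Aws\S3\S3Client;
+
+    $s3 = S3Client::factory(array(
+        'base_url'          => 'https://s3.example.internal', // placeholder custom endpoint
+        'region'            => 'us-east-1',
+        'signature'         => 'v4',
+        'signature.service' => 's3',        // service scope used when signing
+        'signature.region'  => 'us-east-1'  // region scope used when signing
+    ));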
+
+.. _using_proxy:
+
+Using a proxy
+~~~~~~~~~~~~~
+
+You can send requests with the AWS SDK for PHP through a proxy by using the "request options" of a client. These
+request options are applied to each HTTP request sent from the client. One of the settings that can be specified is
+the ``proxy`` option.
+
+Request options are passed to a client through the client's factory method:
+
+.. code-block:: php
+
+ use Aws\S3\S3Client;
+
+ $s3 = S3Client::factory(array(
+ 'request.options' => array(
+ 'proxy' => '127.0.0.1:123'
+ )
+ ));
+
+The above example tells the client that all requests should be proxied through an HTTP proxy located at the
+``127.0.0.1`` IP address using port ``123``.
+
+You can supply a username and password when specifying your proxy setting if needed, using the format of
+``username:password@host:port``.
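+
+For example, assuming a hypothetical proxy at ``proxy.example.com:8080`` that requires authentication (the host, port,
+and credentials here are placeholders), the setting might look like this:
+
+.. code-block:: php
+
+    <?php
+
+    use Aws\S3\S3Client;
+
+    $s3 = S3Client::factory(array(
+        'request.options' => array(
+            'proxy' => 'user:pass@proxy.example.com:8080'
+        )
+    ));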
diff --git a/vendor/aws/aws-sdk-php/docs/credentials.rst b/vendor/aws/aws-sdk-php/docs/credentials.rst
new file mode 100644
index 0000000..24f1018
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/credentials.rst
@@ -0,0 +1,453 @@
+Providing Credentials to the SDK
+================================
+
+Introduction
+------------
+
+In order to authenticate requests, AWS services require you to provide your AWS access keys, also known as your AWS
+**access key ID** and **secret access key**. In the AWS SDK for PHP, these access keys are often referred to
+collectively as your **credentials**. This guide demonstrates how to provide your credentials to the AWS SDK for PHP.
+
+There are many ways to provide credentials:
+
+#. :ref:`environment_credentials`
+#. :ref:`instance_profile_credentials`
+#. :ref:`credential_profiles`
+#. :ref:`configuration_credentials`
+#. :ref:`factory_credentials`
+#. :ref:`set_credentials`
+#. :ref:`temporary_credentials`
+
+Which technique should you choose?
+----------------------------------
+
+The technique that you use to provide credentials to the SDK for your application is entirely up to you. Please read
+each section on this page to determine what is the best fit for you. What you choose will depend on many different
+factors, including:
+
+* The environment you are operating in (e.g., development, testing, production)
+* The host of your application (e.g., localhost, Amazon EC2, third-party server)
+* How many sets of credentials you are using
+* The type of project you are developing (e.g., application, CLI, library)
+* How often you rotate your credentials
+* If you rely on temporary or federated credentials
+* Your deployment process
+* Your application framework
+
+Regardless of the technique used, it is encouraged that you follow the IAM best practices when managing your
+credentials, including the recommendation to not use your AWS account's root credentials. Instead, create separate IAM
+users with their own access keys for each project, and tailor the permissions of each user to the needs of that
+project.
+
+*In general, it is recommended that you use IAM roles when running your application on Amazon EC2 and use credential
+profiles or environment variables elsewhere.*
+
+.. _environment_credentials:
+
+Using credentials from environment variables
+--------------------------------------------
+
+If you do not provide credentials to a client object at the time of its instantiation (e.g., via the client's factory
+method or via a service builder configuration), the SDK will attempt to find credentials in your environment when you
+call your first operation. The SDK will use the ``$_SERVER`` superglobal and/or ``getenv()`` function to look for the
+``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` environment variables. These credentials are referred to as
+**environment credentials**.
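+
+As a sketch, if ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` are already exported in the environment, a client
+can be created without any explicit credential options:
+
+.. code-block:: php
+
+    <?php
+
+    use Aws\S3\S3Client;
+
+    // No 'key', 'secret', or 'profile' is provided; on the first operation, the
+    // SDK falls back to the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
+    // environment variables.
+    $s3 = S3Client::factory(array('region' => 'us-east-1'));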
+
+.. _instance_profile_credentials:
+
+Using IAM roles for Amazon EC2 instances
+----------------------------------------
+
+*Using IAM roles is the preferred technique for providing credentials to applications running on Amazon EC2.* IAM roles
+remove the need to worry about credential management from your application. They allow an instance to "assume" a role by
+retrieving temporary credentials from the EC2 instance's metadata server. These temporary credentials, often referred to
+as **instance profile credentials**, allow access to the actions and resources that the role's policy allows.
+
+When launching an EC2 instance, you can choose to associate it with an IAM role. Any application running on that EC2
+instance is then allowed to assume the associated role. Amazon EC2 handles all the legwork of securely authenticating
+instances to the IAM service to assume the role and periodically refreshing the retrieved role credentials, keeping your
+application secure with almost no work on your part.
+
+If you do not explicitly provide credentials to the client object and no environment variable credentials are available,
+the SDK attempts to retrieve instance profile credentials from an Amazon EC2 instance metadata server. These credentials
+are available only when running on Amazon EC2 instances that have been configured with an IAM role.
+
+.. note::
+
+    Instance profile credentials and other temporary credentials generated by the AWS Security Token Service (AWS STS)
+    are not supported by every service. Please check whether the service you are using supports temporary credentials
+    by reading the "AWS Services that Support AWS STS" topic in the AWS STS documentation.
+
+For more information, see the "IAM Roles for Amazon EC2" section of the Amazon EC2 User Guide.
+
+.. _caching_credentials:
+
+Caching IAM role credentials
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+While using IAM role credentials is the preferred method for providing credentials to an application running on an
+Amazon EC2 instance, the roundtrip from the application to the instance metadata server on each request can introduce
+latency. In these situations, you might find that utilizing a caching layer on top of your IAM role credentials can
+eliminate the introduced latency.
+
+The easiest way to add a cache to your IAM role credentials is to specify a credentials cache using the
+``credentials.cache`` option in a client's factory method or in a service builder configuration file. The
+``credentials.cache`` configuration setting should be set to an object that implements Guzzle's
+``Guzzle\Cache\CacheAdapterInterface`` (see the Guzzle cache adapters documentation). This interface provides an
+abstraction layer over various cache backends, including Doctrine Cache, the Zend Framework 2 cache, etc.
+
+.. code-block:: php
+
+    <?php
+
+    use Doctrine\Common\Cache\FilesystemCache;
+    use Guzzle\Cache\DoctrineCacheAdapter;
+    use Aws\S3\S3Client;
+
+    // Create a cache adapter that stores data on the filesystem
+    $cacheAdapter = new DoctrineCacheAdapter(new FilesystemCache('/tmp/cache'));
+
+    // Provide a credentials.cache option to cache the credentials to the filesystem
+    $s3 = S3Client::factory(array(
+        'credentials.cache' => $cacheAdapter
+    ));
+
+In the preceding example, the addition of ``credentials.cache`` causes credentials to be cached to the local filesystem
+using Doctrine's caching system. Every request that uses this cache adapter first
+checks if the credentials are in the cache. If the credentials are found in the cache, the client then ensures that the
+credentials are not expired. In the event that cached credentials become expired, the client automatically refreshes the
+credentials on the next request and populates the cache with the updated credentials.
+
+A credentials cache can also be used in a service builder configuration:
+
+.. code-block:: php
+
+    <?php return array(
+        'includes' => array('_aws'),
+        'services' => array(
+            'default_settings' => array(
+                'params' => array(
+                    'credentials.cache' => $cacheAdapter
+                )
+            )
+        )
+    );
+
+If you were to use the above configuration file with a service builder, then all of the clients created through the
+service builder would utilize a shared credentials cache object.
+
+.. _credential_profiles:
+
+Using the AWS credentials file and credential profiles
+------------------------------------------------------
+
+Starting with the AWS SDK for PHP version 2.6.2, you can use an AWS credentials file to specify your credentials. This
+is a special, INI-formatted file stored under your HOME directory, and is a good way to manage credentials for your
+development environment. The file should be placed at ``~/.aws/credentials``, where ``~`` represents your HOME
+directory.
+
+Using an AWS credentials file offers a few benefits:
+
+1. Your projects' credentials are stored outside of your projects, so there is no chance of accidentally committing
+ them into version control.
+2. You can define and name multiple sets of credentials in one place.
+3. You can easily reuse the same credentials between projects.
+4. Other AWS SDKs and tools support, or will soon support, this same credentials file. This allows you to reuse your
+ credentials with other tools.
+
+The format of the AWS credentials file should look something like the following:
+
+.. code-block:: ini
+
+ [default]
+ aws_access_key_id = YOUR_AWS_ACCESS_KEY_ID
+ aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY
+
+ [project1]
+ aws_access_key_id = ANOTHER_AWS_ACCESS_KEY_ID
+ aws_secret_access_key = ANOTHER_AWS_SECRET_ACCESS_KEY
+
+Each section (e.g., ``[default]``, ``[project1]``) represents a separate credential **profile**. Profiles can be
+referenced from an SDK configuration file, or when you are instantiating a client, using the ``profile`` option:
+
+.. code-block:: php
+
+    <?php
+
+    use Aws\S3\S3Client;
+
+    $s3 = S3Client::factory(array(
+        'profile' => 'project1',
+        'region'  => 'us-west-2',
+    ));
+
+If no credentials or profiles were explicitly provided to the SDK and no credentials were defined in environment
+variables, but a credentials file is defined, the SDK will use the "default" profile. You can change the default
+profile by specifying an alternate profile name in the ``AWS_PROFILE`` environment variable.
+
+.. _hardcoded_credentials:
+
+Setting credentials explicitly in your code
+-------------------------------------------
+
+The SDK allows you to explicitly set your credentials in your project in a few different ways. These techniques are
+useful for rapid development, integrating with existing configuration systems (e.g., your PHP framework of choice), and
+using :ref:`temporary credentials <temporary_credentials>`. However, **be careful to not hard-code your credentials**
+inside of your applications. Hard-coding your credentials can be dangerous, because it is easy to accidentally commit
+your credentials into an SCM repository, potentially exposing your credentials to more people than intended. It can also
+make it difficult to rotate credentials in the future.
+
+.. _configuration_credentials:
+
+Using a configuration file with the service builder
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The SDK provides a service builder that can be used to share configuration values across multiple clients. The service
+builder allows you to specify default configuration values (e.g., credentials and regions) that are used by every
+client. The service builder is configured using either JSON configuration files or PHP scripts that return an array.
+
+The following is an example of a configuration script that returns an array of configuration data that can be used by
+the service builder:
+
+.. code-block:: php
+
+    <?php return array(
+        'includes' => array('_aws'),
+        'services' => array(
+            // All AWS clients extend from 'default_settings'. Here we are
+            // overriding 'default_settings' with our default credentials and
+            // providing a default region setting.
+            'default_settings' => array(
+                'params' => array(
+                    'key'    => 'YOUR_AWS_ACCESS_KEY_ID',
+                    'secret' => 'YOUR_AWS_SECRET_ACCESS_KEY',
+                    'region' => 'us-west-1'
+                )
+            )
+        )
+    );
+
+After creating and saving the configuration file, you need to instantiate a service builder.
+
+.. code-block:: php
+
+    <?php
+
+    use Aws\Common\Aws;
+
+    // Instantiate the service builder using your custom configuration file
+    $aws = Aws::factory('/path/to/custom/config.php');
+
+    // Get a client from the service builder by its service name
+    $s3Client = $aws->get('s3');
+
+.. _factory_credentials:
+
+Passing credentials into a client factory method
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A simple way to specify your credentials is by injecting them directly into the factory method when instantiating the
+client object.
+
+.. code-block:: php
+
+    <?php
+
+    use Aws\S3\S3Client;
+
+    $s3 = S3Client::factory(array(
+        'key'    => 'YOUR_AWS_ACCESS_KEY_ID',
+        'secret' => 'YOUR_AWS_SECRET_ACCESS_KEY',
+    ));
+
+In some cases, you may already have an instance of a ``Credentials`` object. You can use this instead of specifying your
+access keys separately.
+
+.. code-block:: php
+
+    <?php
+
+    use Aws\Common\Credentials\Credentials;
+    use Aws\S3\S3Client;
+
+    $credentials = new Credentials('YOUR_AWS_ACCESS_KEY_ID', 'YOUR_AWS_SECRET_ACCESS_KEY');
+
+    $s3 = S3Client::factory(array(
+        'credentials' => $credentials
+    ));
+
+You may also want to read the section in the Getting Started Guide about using a client's factory method for more
+details.
+
+.. _set_credentials:
+
+Setting credentials after instantiation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+At any time after instantiating the client, you can set the credentials the client should use with the
+``setCredentials()`` method.
+
+.. code-block:: php
+
+    <?php
+
+    use Aws\Common\Credentials\Credentials;
+
+    $credentials = new Credentials('YOUR_AWS_ACCESS_KEY_ID', 'YOUR_AWS_SECRET_ACCESS_KEY');
+
+    $s3Client->setCredentials($credentials);
+
+This can be used to change the credentials, set temporary credentials, refresh expired credentials, etc.
+
+Using the ``setCredentials()`` method will also trigger a ``client.credentials_changed`` event, so you can program other
+parts of your application to react to the change. To do this, you just need to add a listener to the client object.
+
+.. code-block:: php
+
+ use Aws\S3\S3Client;
+ use Aws\Common\Credentials\Credentials;
+
+ // Create 2 sets of credentials
+ $credentials1 = new Credentials('ACCESS_KEY_1', 'SECRET_KEY_1');
+ $credentials2 = new Credentials('ACCESS_KEY_2', 'SECRET_KEY_2');
+
+ // Instantiate the client with the first credential set
+ $s3Client = S3Client::factory(array('credentials' => $credentials1));
+
+ // Get the event dispatcher and register a listener for the credential change
+ $dispatcher = $s3Client->getEventDispatcher();
+ $dispatcher->addListener('client.credentials_changed', function ($event) {
+ $formerAccessKey = $event['former_credentials']->getAccessKey();
+ $currentAccessKey = $event['credentials']->getAccessKey();
+ echo "Access key has changed from {$formerAccessKey} to {$currentAccessKey}.\n";
+ });
+
+ // Change the credentials to the second set to trigger the event
+ $s3Client->setCredentials($credentials2);
+ //> Access key has changed from ACCESS_KEY_1 to ACCESS_KEY_2.
+
+.. _temporary_credentials:
+
+Using temporary credentials from AWS STS
+----------------------------------------
+
+The AWS Security Token Service (AWS STS) enables you to request limited-privilege, **temporary credentials** for AWS
+IAM users or for users that you authenticate via identity federation. One common use case for temporary credentials is
+to grant mobile or client-side applications access to AWS resources by authenticating users through third-party
+identity providers (read more about Web Identity Federation in the AWS STS documentation).
+
+.. note::
+
+    Temporary credentials generated by AWS STS are not supported by every service. Please check whether the service
+    you are using supports temporary credentials by reading the "AWS Services that Support AWS STS" topic in the AWS
+    STS documentation.
+
+Getting temporary credentials
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+AWS STS has several operations that return temporary credentials, but the ``GetSessionToken`` operation is the simplest
+for demonstration purposes. Assuming you have an instance of ``Aws\Sts\StsClient`` stored in the ``$stsClient``
+variable, this is how you call it:
+
+.. code-block:: php
+
+ $result = $stsClient->getSessionToken();
+
+The result for ``GetSessionToken`` and the other AWS STS operations always contains a ``'Credentials'`` value. If you
+print the result (e.g., ``print_r($result)``), it looks like the following:
+
+::
+
+    Array
+    (
+        ...
+        [Credentials] => Array
+            (
+                [SessionToken] => '<temporary-session-token>'
+                [SecretAccessKey] => '<temporary-secret-access-key>'
+                [Expiration] => 2013-11-01T01:57:52Z
+                [AccessKeyId] => '<temporary-access-key-id>'
+            )
+        ...
+    )
+
+Providing temporary credentials to the SDK
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can use temporary credentials with another AWS client by instantiating the client and passing in the values received
+from AWS STS directly.
+
+.. code-block:: php
+
+ use Aws\S3\S3Client;
+
+ $result = $stsClient->getSessionToken();
+
+ $s3Client = S3Client::factory(array(
+ 'key' => $result['Credentials']['AccessKeyId'],
+ 'secret' => $result['Credentials']['SecretAccessKey'],
+ 'token' => $result['Credentials']['SessionToken'],
+ ));
+
+You can also construct a ``Credentials`` object and use that when instantiating the client.
+
+.. code-block:: php
+
+ use Aws\Common\Credentials\Credentials;
+ use Aws\S3\S3Client;
+
+ $result = $stsClient->getSessionToken();
+
+ $credentials = new Credentials(
+ $result['Credentials']['AccessKeyId'],
+ $result['Credentials']['SecretAccessKey'],
+ $result['Credentials']['SessionToken']
+ );
+
+ $s3Client = S3Client::factory(array('credentials' => $credentials));
+
+However, the *best* way to provide temporary credentials is to use the ``createCredentials()`` helper method included
+with the ``StsClient``. This method extracts the data from an AWS STS result and creates the ``Credentials`` object for
+you.
+
+.. code-block:: php
+
+ $result = $stsClient->getSessionToken();
+ $credentials = $stsClient->createCredentials($result);
+
+ $s3Client = S3Client::factory(array('credentials' => $credentials));
+
+You can also use the same technique when setting credentials on an existing client object.
+
+.. code-block:: php
+
+ $credentials = $stsClient->createCredentials($stsClient->getSessionToken());
+ $s3Client->setCredentials($credentials);
+
+For more information about why you might need to use temporary credentials in your application or project, see the
+"Scenarios for Granting Temporary Access" topic in the AWS STS documentation.
diff --git a/vendor/aws/aws-sdk-php/docs/faq.rst b/vendor/aws/aws-sdk-php/docs/faq.rst
new file mode 100644
index 0000000..65566bc
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/faq.rst
@@ -0,0 +1,220 @@
+================================
+Frequently Asked Questions (FAQ)
+================================
+
+What methods are available on a client?
+---------------------------------------
+
+The AWS SDK for PHP utilizes service descriptions and dynamic
+`magic __call() methods `_ to execute API
+operations. Every magic method supported by a client is documented in the docblock of a client class using ``@method``
+annotations. Several PHP IDEs, including `PHPStorm `_ and
+`Zend Studio `_, are able to autocomplete based on ``@method`` annotations.
+You can find a full list of methods available for a web service client in the
+`API documentation `_ of the client or in the
+`user guide `_ for that client.
+
+For example, the Amazon S3 client supports the following operations: :ref:`S3_operations`
+
+What do I do about a cURL SSL certificate error?
+------------------------------------------------
+
+This issue can occur when using an out-of-date CA bundle with cURL and SSL. You
+can get around this issue by updating the CA bundle on your server or by downloading
+a more up-to-date CA bundle from the `cURL website directly `_.
+
+Download an up-to-date CA bundle somewhere on your system and instruct the
+SDK to use it rather than the default. You can do this by specifying the
+``ssl.certificate_authority`` option in a client's factory method or in the
+configuration settings used with ``Aws\Common\Aws``.
+
+.. code-block:: php
+
+ $aws = Aws\Common\Aws::factory(array(
+ 'region' => 'us-west-2',
+ 'key' => '****',
+ 'secret' => '****',
+ 'ssl.certificate_authority' => '/path/to/updated/cacert.pem'
+ ));
+
+You can find out more about how cURL bundles the CA bundle here: http://curl.haxx.se/docs/caextract.html
+
+How do I disable SSL?
+---------------------
+
+.. warning::
+
+ Because SSL requires all data to be encrypted and requires more TCP packets to complete a connection handshake than
+ just TCP, disabling SSL may provide a small performance improvement. However, with SSL disabled, all data is sent
+ over the wire unencrypted. Before disabling SSL, you must carefully consider the security implications and the
+ potential for eavesdropping over the network.
+
+You can disable SSL by setting the ``scheme`` parameter in a client factory method to 'http'.
+
+.. code-block:: php
+
+ $client = Aws\DynamoDb\DynamoDbClient::factory(array(
+ 'region' => 'us-west-2',
+ 'scheme' => 'http'
+ ));
+
+How can I make the SDK faster?
+------------------------------
+
+See :doc:`performance` for more information.
+
+Why can't I upload or download files greater than 2GB?
+------------------------------------------------------
+
+Because PHP's integer type is signed and many platforms use 32-bit integers, the
+AWS SDK for PHP does not correctly handle files larger than 2GB on a 32-bit stack
+(where "stack" includes CPU, OS, web server, and PHP binary). This is a
+`well-known PHP issue `_. In the
+case of Microsoft® Windows®, there are no official builds of PHP that support
+64-bit integers.
+
+The recommended solution is to use a `64-bit Linux stack `_,
+such as the 64-bit Amazon Linux AMI with the latest version of PHP installed.
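+
+If you are unsure whether your stack uses 64-bit integers, you can check the ``PHP_INT_SIZE`` constant directly. The
+following standalone snippet (an illustration, not part of the SDK) is a quick diagnostic:
+
+.. code-block:: php
+
+    // PHP_INT_SIZE is 8 on a 64-bit build and 4 on a 32-bit build, where
+    // filesize() overflows for files larger than PHP_INT_MAX (2147483647)
+    if (PHP_INT_SIZE === 8) {
+        echo "64-bit integers: files over 2GB are reported correctly\n";
+    } else {
+        echo "32-bit integers: files over 2GB will overflow filesize()\n";
+    }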
+
+For more information, please see `PHP filesize: Return values `_.
+
+How can I see what data is sent over the wire?
+----------------------------------------------
+
+You can attach a ``Guzzle\Plugin\Log\LogPlugin`` to any client to see all request and
+response data sent over the wire. The LogPlugin works with any logger that implements
+the ``Guzzle\Log\LogAdapterInterface`` interface (currently Monolog, ZF1, ZF2).
+
+If you just want to quickly see what data is being sent over the wire, you can
+simply attach a debug log plugin to your client.
+
+.. code-block:: php
+
+    use Aws\S3\S3Client;
+    use Guzzle\Plugin\Log\LogPlugin;
+
+ // Create an Amazon S3 client
+ $s3Client = S3Client::factory();
+
+ // Add a debug log plugin
+ $s3Client->addSubscriber(LogPlugin::getDebugPlugin());
+
+For more complex logging or logging to a file, you can build a LogPlugin manually.
+
+.. code-block:: php
+
+    use Aws\S3\S3Client;
+    use Guzzle\Log\MonologLogAdapter;
+    use Guzzle\Plugin\Log\LogPlugin;
+    use Monolog\Logger;
+    use Monolog\Handler\StreamHandler;
+
+ // Create a log channel
+ $log = new Logger('aws');
+ $log->pushHandler(new StreamHandler('/path/to/your.log', Logger::WARNING));
+
+ // Create a log adapter for Monolog
+ $logger = new MonologLogAdapter($log);
+
+ // Create the LogPlugin
+ $logPlugin = new LogPlugin($logger);
+
+ // Create an Amazon S3 client
+ $s3Client = S3Client::factory();
+
+ // Add the LogPlugin to the client
+ $s3Client->addSubscriber($logPlugin);
+
+You can find out more about the LogPlugin on the Guzzle website: http://guzzlephp.org/guide/plugins.html#log-plugin
+
+How can I set arbitrary headers on a request?
+---------------------------------------------
+
+You can add any arbitrary headers to a service operation by setting the ``command.headers`` value. The following example
+shows how to add an ``X-Foo-Baz`` header to an Amazon S3 PutObject operation.
+
+.. code-block:: php
+
+ $s3Client = S3Client::factory();
+ $s3Client->putObject(array(
+ 'Key' => 'test',
+ 'Bucket' => 'mybucket',
+ 'command.headers' => array(
+ 'X-Foo-Baz' => 'Bar'
+ )
+ ));
+
+Does the SDK follow semantic versioning?
+----------------------------------------
+
+Yes. The SDK follows a semantic versioning scheme similar to – but not the same as – `semver `_.
+Instead of the **MAJOR.MINOR.PATCH** scheme specified by semver, the SDK actually follows a scheme that looks like
+**PARADIGM.MAJOR.MINOR** where:
+
+1. The **PARADIGM** version number is incremented when **drastic, breaking changes** are made to the SDK, such that the
+ fundamental way of using the SDK is different. You are probably aware that version 1.x and version 2.x of the AWS SDK
+ for PHP are *very* different.
+2. The **MAJOR** version number is incremented when **breaking changes** are made to the API. These are usually small
+   changes, and only occur when one of the services makes breaking changes to their API. Make sure to check the
+ `CHANGELOG `_ and
+ `UPGRADING `_ documents when these changes occur.
+3. The **MINOR** version number is incremented when any **backwards-compatible** change is made, whether it's a new
+ feature or a bug fix.
+
+The best way to ensure that you are not affected by breaking changes is to set your dependency on the SDK in Composer to
+stay within a particular **PARADIGM.MAJOR** version. This can be done using the wildcard syntax:
+
+.. code-block:: json
+
+ {
+ "require": {
+ "aws/aws-sdk-php": "2.4.*"
+ }
+ }
+
+...Or by using the tilde operator. The following statement is equivalent to ``>=2.4.9,<2.5``:
+
+.. code-block:: json
+
+ {
+ "require": {
+ "aws/aws-sdk-php": "~2.4.9"
+ }
+ }
+
+See the `Composer documentation `_ for more information
+on configuring your dependencies.
+
+The SDK may at some point adopt the semver standard, but this will probably not happen until the next paradigm-type
+change.
+
+Why am I seeing a "Cannot redeclare class" error?
+-------------------------------------------------
+
+We have observed this error a few times when using the ``aws.phar`` from the CLI with APC enabled. This is due to an
+issue with how APC interacts with phars. Fortunately, there are a few ways to work around it. Please choose the one
+that makes the most sense for your environment and application.
+
+1. **Disable APC for CLI** - Change the ``apc.enable_cli`` INI setting to ``Off``.
+2. **Tell APC not to cache phars** - Change the ``apc.filters`` INI setting to include ``"^phar://"``.
+3. **Don't use APC** - PHP 5.5, for example, comes with Zend OpCache built in. This problem has not been observed with
+ Zend OpCache.
+4. **Don't use the phar** - You can install the SDK through Composer (recommended) or by using the zip file.
+
+What is an InstanceProfileCredentialsException?
+-----------------------------------------------
+
+If you are seeing an ``Aws\Common\Exception\InstanceProfileCredentialsException`` while using the SDK, this means that
+the SDK was not provided with any credentials.
+
+If you instantiate a client *without* credentials, on the first time that you perform a service operation, the SDK will
+attempt to find credentials. It first checks certain environment variables, then it looks for instance profile
+credentials, which are only available on configured Amazon EC2 instances. If absolutely no credentials are provided or
+found, an ``Aws\Common\Exception\InstanceProfileCredentialsException`` is thrown.
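+
+The order of these checks can be sketched as follows. This is only an illustration of the logic described above, not
+actual SDK code; the function name is hypothetical, and the environment variable names shown are the conventional AWS
+ones:
+
+.. code-block:: php
+
+    // Hypothetical sketch of the credential lookup order
+    function describeCredentialSource()
+    {
+        // 1. Credentials passed explicitly to a client factory always win.
+        // 2. Otherwise, the SDK checks environment variables.
+        if (getenv('AWS_ACCESS_KEY_ID') && getenv('AWS_SECRET_ACCESS_KEY')) {
+            return 'environment variables';
+        }
+        // 3. Finally, it tries the EC2 instance metadata service; outside of a
+        //    configured EC2 instance this fails, and an
+        //    InstanceProfileCredentialsException is thrown.
+        return 'instance profile (or exception if unavailable)';
+    }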
+
+If you are seeing this error and you are intending to use instance profile credentials, then you need to make sure that
+the Amazon EC2 instance that the SDK is running on is configured with an appropriate IAM role.
+
+If you are seeing this error and you are **not** intending to use instance profile credentials, then you need to make
+sure that you are properly providing credentials to the SDK.
+
+For more information, see :doc:`credentials`.
diff --git a/vendor/aws/aws-sdk-php/docs/feature-commands.rst b/vendor/aws/aws-sdk-php/docs/feature-commands.rst
new file mode 100644
index 0000000..2c54449
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/feature-commands.rst
@@ -0,0 +1,204 @@
+===============
+Command Objects
+===============
+
+Command objects are fundamental to how the SDK works. In normal usage of the SDK, you may never interact with command
+objects. However, if you are :ref:`performing operations in parallel `,
+:ref:`inspecting data from the request or response `, or writing custom plugins, you will need
+to understand how they work.
+
+Typical SDK usage
+-----------------
+
+.. include:: _snippets/performing-operations.txt
+
+A peek under the hood
+---------------------
+
+If you examine a client class, you will see that the methods corresponding to the operations do not actually exist. They
+are implemented using the ``__call()`` magic method behavior. These pseudo-methods are actually shortcuts that
+encapsulate the SDK's — and the underlying Guzzle library's — use of command objects.
+
+For example, you could perform the same ``DescribeTable`` operation from the preceding section using command objects:
+
+.. code-block:: php
+
+ $command = $dynamoDbClient->getCommand('DescribeTable', array(
+ 'TableName' => 'YourTableName',
+ ));
+ $result = $command->getResult();
+
+A **Command** is an object that represents the execution of a service operation. Command objects are an abstraction of
+the process of formatting a request to a service, executing the request, receiving the response, and formatting the
+results. Commands are created and executed by the client and contain references to **Request** and **Response** objects.
+The **Result** object is what we refer to as a :doc:`"modeled response" `.
+
+Using command objects
+---------------------
+
+Using the pseudo-methods for performing operations is shorter and preferred for typical use cases, but command objects
+provide greater flexibility and access to additional data.
+
+Manipulating command objects before execution
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When you create a command using a client's ``getCommand()`` method, it does not immediately execute. Because commands
+are lazily executed, it is possible to pass the command object around and add or modify the parameters. The following
+examples show how to work with command objects:
+
+.. code-block:: php
+
+ // You can add parameters after instantiation
+ $command = $s3Client->getCommand('ListObjects');
+ $command->set('MaxKeys', 50);
+ $command->set('Prefix', 'foo/baz/');
+ $result = $command->getResult();
+
+ // You can also modify parameters
+ $command = $s3Client->getCommand('ListObjects', array(
+ 'MaxKeys' => 50,
+ 'Prefix' => 'foo/baz/',
+ ));
+ $command->set('MaxKeys', 100);
+ $result = $command->getResult();
+
+    // The set method is chainable
+    $result = $s3Client->getCommand('ListObjects')
+        ->set('MaxKeys', 50)
+        ->set('Prefix', 'foo/baz/')
+        ->getResult();
+
+ // You can also use array access
+ $command = $s3Client->getCommand('ListObjects');
+ $command['MaxKeys'] = 50;
+ $command['Prefix'] = 'foo/baz/';
+ $result = $command->getResult();
+
+Also, see the `API docs for commands
+`_.
+
+.. _requests_and_responses:
+
+Request and response objects
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+From the command object, you can access the request, response, and result objects. The availability of these objects
+depends on the state of the command object.
+
+Managing command state
+^^^^^^^^^^^^^^^^^^^^^^
+
+Commands must be prepared before the request object is available, and commands must be executed before the response and
+result objects are available.
+
+.. code-block:: php
+
+ // 1. Create
+ $command = $client->getCommand('OperationName');
+
+ // 2. Prepare
+ $command->prepare();
+ $request = $command->getRequest();
+ // Note: `prepare()` also returns the request object
+
+ // 3. Execute
+ $command->execute();
+ $response = $command->getResponse();
+ $result = $command->getResult();
+ // Note: `execute()` also returns the result object
+
+This is nice, because it gives you a chance to modify the request before it is actually sent.
+
+.. code-block:: php
+
+ $command = $client->getCommand('OperationName');
+ $request = $command->prepare();
+ $request->addHeader('foo', 'bar');
+ $result = $command->execute();
+
+You don't have to manage each aspect of the state yourself, though: calling ``execute()`` will also prepare the
+command, and calling ``getResult()`` will both prepare and execute the command.
+
+Using requests and responses
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Request and response objects contain data about the actual requests and responses to the service.
+
+.. code-block:: php
+
+ $command = $client->getCommand('OperationName');
+ $command->execute();
+
+ // Get and use the request object
+ $request = $command->getRequest();
+ $contentLength = $request->getHeader('Content-Length');
+ $url = $request->getUrl();
+
+ // Get and use the response object
+ $response = $command->getResponse();
+ $success = $response->isSuccessful();
+ $status = $response->getStatusCode();
+
+You can also take advantage of the ``__toString`` behavior of the request and response objects. If you print them
+(e.g., ``echo $request;``), you can see the raw request and response data that was sent over the wire.
+
+To learn more, read the API docs for the `Request
+`_ and `Response
+`_ classes.
+
+.. _parallel_commands:
+
+Executing commands in parallel
+------------------------------
+
+The AWS SDK for PHP allows you to execute multiple operations in parallel when you use command objects. This can reduce
+the total time (sometimes drastically) it takes to perform a set of operations, since you can do them at the same time
+instead of one after another. The following shows an example of how you could upload two files to Amazon S3 at the same
+time.
+
+.. code-block:: php
+
+ $commands = array();
+ $commands[] = $s3Client->getCommand('PutObject', array(
+ 'Bucket' => 'SOME_BUCKET',
+ 'Key' => 'photos/photo01.jpg',
+ 'Body' => fopen('/tmp/photo01.jpg', 'r'),
+ ));
+ $commands[] = $s3Client->getCommand('PutObject', array(
+ 'Bucket' => 'SOME_BUCKET',
+ 'Key' => 'photos/photo02.jpg',
+ 'Body' => fopen('/tmp/photo02.jpg', 'r'),
+ ));
+
+ // Execute an array of command objects to do them in parallel
+ $s3Client->execute($commands);
+
+ // Loop over the commands, which have now all been executed
+ foreach ($commands as $command) {
+ $result = $command->getResult();
+ // Do something with result
+ }
+
+Error handling with parallel commands
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When executing commands in parallel, error handling becomes a bit trickier. If an exception is thrown, then the SDK (via
+Guzzle) will aggregate the exceptions together and throw a single ``Guzzle\Service\Exception\CommandTransferException``
+(`see the API docs
+`_) once all
+of the commands have completed execution. This exception class keeps track of which commands succeeded and which failed
+and also allows you to fetch the original exceptions thrown for failed commands.
+
+.. code-block:: php
+
+ use Guzzle\Service\Exception\CommandTransferException;
+
+ try {
+ $succeeded = $client->execute($commands);
+ } catch (CommandTransferException $e) {
+ $succeeded = $e->getSuccessfulCommands();
+ echo "Failed Commands:\n";
+ foreach ($e->getFailedCommands() as $failedCommand) {
+ echo $e->getExceptionForFailedCommand($failedCommand)->getMessage() . "\n";
+ }
+ }
diff --git a/vendor/aws/aws-sdk-php/docs/feature-dynamodb-session-handler.rst b/vendor/aws/aws-sdk-php/docs/feature-dynamodb-session-handler.rst
new file mode 100644
index 0000000..36da691
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/feature-dynamodb-session-handler.rst
@@ -0,0 +1,292 @@
+========================
+DynamoDB Session Handler
+========================
+
+Introduction
+------------
+
+The **DynamoDB Session Handler** is a custom session handler for PHP that allows developers to use Amazon DynamoDB as a
+session store. Using DynamoDB for session storage alleviates issues that occur with session handling in a distributed
+web application by moving sessions off of the local file system and into a shared location. DynamoDB is fast, scalable,
+easy to set up, and handles replication of your data automatically.
+
+The DynamoDB Session Handler uses the ``session_set_save_handler()`` function to hook DynamoDB operations into PHP's
+`native session functions `_ to allow for a true drop-in replacement. This
+includes support for features like session locking and garbage collection which are a part of PHP's default session
+handler.
+
+For more information on the Amazon DynamoDB service, please visit the `Amazon DynamoDB homepage
+`_.
+
+Basic Usage
+-----------
+
+1. Register the handler
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The first step is to instantiate the Amazon DynamoDB client and register the session handler.
+
+.. code-block:: php
+
+ require 'vendor/autoload.php';
+
+ use Aws\DynamoDb\DynamoDbClient;
+
+ $dynamoDb = DynamoDbClient::factory(array(
+        'key'    => '<aws access key>',
+        'secret' => '<aws secret key>',
+        'region' => '<region>'
+ ));
+
+ $sessionHandler = $dynamoDb->registerSessionHandler(array(
+ 'table_name' => 'sessions'
+ ));
+
+You can also instantiate the ``SessionHandler`` object directly using its ``factory`` method.
+
+.. code-block:: php
+
+ require 'vendor/autoload.php';
+
+ use Aws\DynamoDb\DynamoDbClient;
+ use Aws\DynamoDb\Session\SessionHandler;
+
+ $dynamoDb = DynamoDbClient::factory(array(
+        'key'    => '<aws access key>',
+        'secret' => '<aws secret key>',
+        'region' => '<region>',
+ ));
+
+ $sessionHandler = SessionHandler::factory(array(
+ 'dynamodb_client' => $dynamoDb,
+ 'table_name' => 'sessions',
+ ));
+ $sessionHandler->register();
+
+2. Create a table for storing your sessions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Before you can actually use the session handler, you need to create a table in which to store the sessions. This can be
+done ahead of time through the `AWS Console for Amazon DynamoDB `_, or you
+can use the session handler object (which you've already configured with the table name) by doing the following:
+
+.. code-block:: php
+
+ $sessionHandler->createSessionsTable(5, 5);
+
+The two parameters for this function are used to specify the read and write provisioned throughput for the table,
+respectively.
+
+.. note::
+
+ The ``createSessionsTable`` function uses the ``TableExists`` :doc:`waiter ` internally, so this
+ function call will block until the table exists and is ready to be used.
+
+3. Use PHP sessions like normal
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once the session handler is registered and the table exists, you can write to and read from the session using the
+``$_SESSION`` superglobal, just like you normally do with PHP's default session handler. The DynamoDB Session Handler
+encapsulates and abstracts the interactions with Amazon DynamoDB and enables you to simply use PHP's native session
+functions and interface.
+
+.. code-block:: php
+
+ // Start the session
+ session_start();
+
+ // Alter the session data
+ $_SESSION['user.name'] = 'jeremy';
+ $_SESSION['user.role'] = 'admin';
+
+ // Close the session (optional, but recommended)
+ session_write_close();
+
+Configuration
+-------------
+
+You may configure the behavior of the session handler using the following options. All options are optional, but you
+should make sure to understand what the defaults are.
+
+============================ ===========================================================================================
+``table_name`` The name of the DynamoDB table in which to store the sessions. This defaults to ``sessions``.
+---------------------------- -------------------------------------------------------------------------------------------
+``hash_key`` The name of the hash key in the DynamoDB sessions table. This defaults to ``id``.
+---------------------------- -------------------------------------------------------------------------------------------
+``session_lifetime`` The lifetime of an inactive session before it should be garbage collected. If it is not
+ provided, then the actual lifetime value that will be used is
+ ``ini_get('session.gc_maxlifetime')``.
+---------------------------- -------------------------------------------------------------------------------------------
+``consistent_read`` Whether or not the session handler should use consistent reads for the ``GetItem``
+ operation. This defaults to ``true``.
+---------------------------- -------------------------------------------------------------------------------------------
+``locking_strategy`` The strategy used for doing session locking. By default the handler uses the
+ ``NullLockingStrategy``, which means that session locking is **not** enabled (see the
+ :ref:`ddbsh-session-locking` section for more information). Valid values for this option
+                             include null, 'null', 'pessimistic', or an instance of ``NullLockingStrategy`` or
+ ``PessimisticLockingStrategy``.
+---------------------------- -------------------------------------------------------------------------------------------
+``automatic_gc`` Whether or not to use PHP's session auto garbage collection. This defaults to the value of
+                             ``(bool) ini_get('session.gc_probability')``, but the recommended value is ``false``
+                             (see the :ref:`ddbsh-garbage-collection` section for more information).
+---------------------------- -------------------------------------------------------------------------------------------
+``gc_batch_size`` The batch size used for removing expired sessions during garbage collection. This defaults
+ to ``25``, which is the maximum size of a single ``BatchWriteItem`` operation. This value
+ should also take your provisioned throughput into account as well as the timing of your
+ garbage collection.
+---------------------------- -------------------------------------------------------------------------------------------
+``gc_operation_delay`` The delay (in seconds) between service operations performed during garbage collection. This
+ defaults to ``0``. Increasing this value allows you to throttle your own requests in an
+ attempt to stay within your provisioned throughput capacity during garbage collection.
+---------------------------- -------------------------------------------------------------------------------------------
+``max_lock_wait_time`` Maximum time (in seconds) that the session handler should wait to acquire a lock before
+ giving up. This defaults to ``10`` and is only used with the ``PessimisticLockingStrategy``.
+---------------------------- -------------------------------------------------------------------------------------------
+``min_lock_retry_microtime`` Minimum time (in microseconds) that the session handler should wait between attempts
+ to acquire a lock. This defaults to ``10000`` and is only used with the
+ ``PessimisticLockingStrategy``.
+---------------------------- -------------------------------------------------------------------------------------------
+``max_lock_retry_microtime`` Maximum time (in microseconds) that the session handler should wait between attempts
+ to acquire a lock. This defaults to ``50000`` and is only used with the
+ ``PessimisticLockingStrategy``.
+---------------------------- -------------------------------------------------------------------------------------------
+``dynamodb_client`` The ``DynamoDbClient`` object that should be used for performing DynamoDB operations. If
+ you register the session handler from a client object using the ``registerSessionHandler()``
+ method, this will default to the client you are registering it from. If using the
+ ``SessionHandler::factory()`` method, you are required to provide an instance of
+ ``DynamoDbClient``.
+============================ ===========================================================================================
+
+To configure the Session Handler, you must specify the configuration options when you instantiate the handler. The
+following code is an example with all of the configuration options specified.
+
+.. code-block:: php
+
+ $sessionHandler = $dynamoDb->registerSessionHandler(array(
+ 'table_name' => 'sessions',
+ 'hash_key' => 'id',
+ 'session_lifetime' => 3600,
+ 'consistent_read' => true,
+ 'locking_strategy' => null,
+ 'automatic_gc' => 0,
+ 'gc_batch_size' => 50,
+ 'max_lock_wait_time' => 15,
+ 'min_lock_retry_microtime' => 5000,
+ 'max_lock_retry_microtime' => 50000,
+ ));
+
+Pricing
+-------
+
+Aside from data storage and data transfer fees, the costs associated with using Amazon DynamoDB are calculated based on
+the provisioned throughput capacity of your table (see the `Amazon DynamoDB pricing details
+`_). Throughput is measured in units of Write Capacity and Read Capacity. The
+Amazon DynamoDB homepage says:
+
+ A unit of Write Capacity enables you to perform one write per second for items of up to 1KB in size. Similarly, a
+ unit of Read Capacity enables you to perform one strongly consistent read per second (or two eventually consistent
+ reads per second) of items of up to 1KB in size. Larger items will require more capacity. You can calculate the
+ number of units of read and write capacity you need by estimating the number of reads or writes you need to do per
+ second and multiplying by the size of your items (rounded up to the nearest KB).
+
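+As a worked example of the rule quoted above (all numbers here are purely illustrative):
+
+.. code-block:: php
+
+    // Capacity units needed = operations per second * item size in KB
+    // (rounded up to the nearest KB)
+    $itemSizeKb      = 3;   // average session item size, rounded up
+    $readsPerSecond  = 50;  // strongly consistent reads
+    $writesPerSecond = 25;
+
+    $readCapacity  = $readsPerSecond * $itemSizeKb;   // 150 read units
+    $writeCapacity = $writesPerSecond * $itemSizeKb;  // 75 write units
+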
+Ultimately, the throughput and the costs required for your sessions table are going to correlate with your expected
+traffic and session size. The following table explains the amount of read and write operations that are performed on
+your DynamoDB table for each of the session functions.
+
++----------------------------------------+-----------------------------------------------------------------------------+
+| Read via ``session_start()`` | * 1 read operation (only 0.5 if ``consistent_read`` is ``false``). |
+| (Using ``NullLockingStrategy``) | * (Conditional) 1 write operation to delete the session if it is expired. |
++----------------------------------------+-----------------------------------------------------------------------------+
+| Read via ``session_start()`` | * A minimum of 1 *write* operation. |
+| (Using ``PessimisticLockingStrategy``) | * (Conditional) Additional write operations for each attempt at acquiring a |
+| | lock on the session. Based on configured lock wait time and retry options.|
+| | * (Conditional) 1 write operation to delete the session if it is expired. |
++----------------------------------------+-----------------------------------------------------------------------------+
+| Write via ``session_write_close()`` | * 1 write operation. |
++----------------------------------------+-----------------------------------------------------------------------------+
+| Delete via ``session_destroy()`` | * 1 write operation. |
++----------------------------------------+-----------------------------------------------------------------------------+
+| Garbage Collection | * 0.5 read operations **per KB of data in the table** to scan for expired |
+| | sessions. |
+| | * 1 write operation **per expired item** to delete it. |
++----------------------------------------+-----------------------------------------------------------------------------+
+
+.. _ddbsh-session-locking:
+
+Session Locking
+---------------
+
+The DynamoDB Session Handler supports pessimistic session locking in order to mimic the behavior of PHP's default
+session handler. By default the DynamoDB Session Handler has this feature *turned off* since it can become a performance
+bottleneck and drive up costs, especially when an application accesses the session frequently via Ajax requests or
+iframes. You should carefully consider whether or not your application requires session locking before enabling it.
+
+By default, the session handler uses the ``NullLockingStrategy``, which does not do any session locking. To enable
+session locking, you should use the ``PessimisticLockingStrategy``, which can be specified when the session handler is
+locking, you should use the ``PessimisticLockingStrategy``, which can be specified when the session handler is created.
+
+.. code-block:: php
+
+ $sessionHandler = $dynamoDb->registerSessionHandler(array(
+ 'table_name' => 'sessions',
+ 'locking_strategy' => 'pessimistic',
+ ));
+
+.. _ddbsh-garbage-collection:
+
+Garbage Collection
+------------------
+
+The DynamoDB Session Handler supports session garbage collection by using a series of ``Scan`` and ``BatchWriteItem``
+operations. Due to the nature of how the ``Scan`` operation works, finding all of the expired sessions and deleting
+them can require a lot of provisioned throughput.
+
+For this reason, it is discouraged to rely on PHP's normal session garbage collection triggers (i.e., the
+``session.gc_probability`` and ``session.gc_divisor`` ini settings). A better practice is to set
+``session.gc_probability`` to ``0`` and schedule the garbage collection to occur during an off-peak time when a
+burst of consumed throughput will not disrupt the rest of the application.
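+
+Disabling the probabilistic trigger is a single ini change, which can also be made at runtime before the session is
+started:
+
+.. code-block:: php
+
+    // Turn off PHP's chance-based GC so that only your scheduled job performs
+    // the (potentially throughput-heavy) DynamoDB garbage collection
+    ini_set('session.gc_probability', '0');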
+
+For example, you could have a nightly cron job trigger a script to run the garbage collection. This script might look
+something like the following:
+
+.. code-block:: php
+
+ require 'vendor/autoload.php';
+
+ use Aws\DynamoDb\DynamoDbClient;
+ use Aws\DynamoDb\Session\SessionHandler;
+
+ $dynamoDb = DynamoDbClient::factory(array(
+        'key'    => '<aws access key>',
+        'secret' => '<aws secret key>',
+        'region' => '<region>',
+ ));
+
+ $sessionHandler = SessionHandler::factory(array(
+ 'dynamodb_client' => $dynamoDb,
+ 'table_name' => 'sessions',
+ ));
+
+ $sessionHandler->garbageCollect();
+
+You can also use the ``gc_operation_delay`` configuration option on the session handler to introduce delays in between
+the ``Scan`` and ``BatchWriteItem`` operations that are performed by the garbage collection process. This will increase
+the amount of time it takes the garbage collection to complete, but it can help you spread out the requests made by the
+session handler in order to help you stay close to or within your provisioned throughput capacity during garbage
+collection.
+
+Best Practices
+--------------
+
+#. Create your sessions table in a region that is geographically closest to or in the same region as your application
+   servers. This will ensure the lowest latency between your application and the DynamoDB service.
+#. Choose the provisioned throughput capacity of your sessions table carefully, taking into account the expected traffic
+ to your application and the expected size of your sessions.
+#. Monitor your consumed throughput through the AWS Management Console or with Amazon CloudWatch and adjust your
+ throughput settings as needed to meet the demands of your application.
+#. Keep the size of your sessions small. Sessions that are less than 1KB will perform better and require less
+ provisioned throughput capacity.
+#. Do not use session locking unless your application requires it.
+#. Instead of using PHP's built-in session garbage collection triggers, schedule your garbage collection via a cron job,
+ or another scheduling mechanism, to run during off-peak hours. Use the ``gc_operation_delay`` option to add delays
+ in between the requests performed for the garbage collection process.
+
diff --git a/vendor/aws/aws-sdk-php/docs/feature-facades.rst b/vendor/aws/aws-sdk-php/docs/feature-facades.rst
new file mode 100644
index 0000000..dcb0c5f
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/feature-facades.rst
@@ -0,0 +1,170 @@
+=====================
+Static Client Facades
+=====================
+
+Introduction
+------------
+
+Version 2.4 of the AWS SDK for PHP adds the ability to enable and use static client "facades". These facades provide an
+easy, static interface to service clients available in the service builder. For example, when working with a normal
+client instance, you might have code that looks like the following:
+
+.. code-block:: php
+
+ // Get the configured S3 client from the service builder
+ $s3 = $aws->get('s3');
+
+ // Execute the CreateBucket command using the S3 client
+ $s3->createBucket(array('Bucket' => 'your_new_bucket_name'));
+
+With client facades enabled, this can also be accomplished with the following code:
+
+.. code-block:: php
+
+ // Execute the CreateBucket command using the S3 client
+ S3::createBucket(array('Bucket' => 'your_new_bucket_name'));
+
+Why Use Client Facades?
+-----------------------
+
+The use of static client facades is completely optional. We have included this feature in the SDK in order to appeal to
+PHP developers who prefer static notation or who are familiar with PHP frameworks like CodeIgniter, Laravel, or Kohana
+where this style of method invocation is common.
+
+Though using static client facades has little real benefit over using client instances, it can make your code more
+concise and prevent you from having to inject the service builder or client instance into the context of where you
+need the client object. This can make your code easier to write and understand. Whether or not you should use the client
+facades is purely a matter of preference.
+
+The way in which client facades work in the AWS SDK for PHP is similar to how `facades work in the Laravel 4
+Framework `_. Even though you are calling static classes, all of the method calls are
+proxied to method calls on actual client instances — the ones stored in the service builder. This means that the usage
+of the clients via the client facades can still be mocked in your unit tests, which removes one of the general
+disadvantages to using static classes in object-oriented programming. For information about how to test code that uses
+client facades, please see the **Testing Code that Uses Client Facades** section below.
+
+Enabling and Using Client Facades
+---------------------------------
+
+To enable static client facades to be used in your application, you must use the ``Aws\Common\Aws::enableFacades``
+method when you set up the service builder.
+
+.. code-block:: php
+
+ // Include the Composer autoloader
+ require 'vendor/autoload.php';
+
+ use Aws\Common\Aws;
+
+ // Instantiate the SDK service builder with my config and enable facades
+ $aws = Aws::factory('/path/to/my_config.php')->enableFacades();
+
+This will set up the client facades and alias them into the global namespace. After that, you can use them anywhere to
+write simpler, more expressive code for interacting with AWS services.
+
+.. code-block:: php
+
+ // List current buckets
+ echo "Current Buckets:\n";
+ foreach (S3::getListBucketsIterator() as $bucket) {
+ echo "{$bucket['Name']}\n";
+ }
+
+ $args = array('Bucket' => 'your_new_bucket_name');
+ $file = '/path/to/the/file/to/upload.jpg';
+
+ // Create a new bucket and wait until it is available for uploads
+ S3::createBucket($args) and S3::waitUntilBucketExists($args);
+ echo "\nCreated a new bucket: {$args['Bucket']}.\n";
+
+ // Upload a file to the new bucket
+ $result = S3::putObject($args + array(
+ 'Key' => basename($file),
+ 'Body' => fopen($file, 'r'),
+ ));
+ echo "\nCreated a new object: {$result['ObjectURL']}\n";
+
+You can also mount the facades into a namespace other than the global namespace. For example, if you wanted to make the
+client facades available in the "Services" namespace, then you could do the following:
+
+.. code-block:: php
+
+ Aws::factory('/path/to/my_config.php')->enableFacades('Services');
+
+ $result = Services\DynamoDb::listTables();
+
+The client facades that are available are determined by what is in your service builder configuration (see
+:doc:`configuration`). If you are extending the SDK's default configuration file or not providing one at all, then all
+of the clients should be accessible from the service builder instance and client facades (once enabled) by default.
+
+Based on the following excerpt from the default configuration file (located at
+``src/Aws/Common/Resources/aws-config.php``):
+
+.. code-block:: php
+
+ 's3' => array(
+ 'alias' => 'S3',
+ 'extends' => 'default_settings',
+ 'class' => 'Aws\S3\S3Client'
+ ),
+
+The ``'class'`` key indicates the client class that the static client facade will proxy to, and the ``'alias'`` key
+indicates what the client facade will be named. Only entries in the service builder config that have both the
+``'alias'`` and ``'class'`` keys specified will be mounted as static client facades. You can update or add to your
+service builder config to alter existing client facades or to create your own custom ones.
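+
+For example, adding an entry like the following to your service builder config would expose a second, separately
+configured Amazon S3 client through its own facade. This is only a sketch: the ``s3.east`` key and ``S3East`` alias
+are hypothetical names chosen for illustration.
+
+.. code-block:: php
+
+ // Hypothetical entry in your service builder config
+ 's3.east' => array(
+ 'alias' => 'S3East',
+ 'extends' => 's3',
+ 'params' => array(
+ 'region' => 'us-east-1'
+ )
+ ),
+
+Once the facades are enabled, calls like ``S3East::listBuckets()`` would be proxied to that client instance.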
+
+Testing Code that Uses Client Facades
+-------------------------------------
+
+With the static client facades in the SDK, even though you are calling static classes, all of the method calls are
+proxied to method calls on actual client instances — the ones stored in the service builder. This means that they can
+be mocked during tests, which removes one of the general disadvantages to using static classes in object-oriented
+programming.
+
+To mock a client facade for a test, you can explicitly set a mocked client object for the key in the service builder
+that would normally contain the client referenced by the client facade. Here is a complete, but contrived, PHPUnit test
+showing how this is done:
+
+.. code-block:: php
+
+ use Aws\Common\Aws;
+ use Guzzle\Service\Resource\Model;
+
+ class SomeKindOfFileBrowserTest extends PHPUnit_Framework_TestCase
+ {
+ private $serviceBuilder;
+
+ public function setUp()
+ {
+ $this->serviceBuilder = Aws::factory();
+ $this->serviceBuilder->enableFacades();
+ }
+
+ public function testCanDoSomethingWithYourAppsFileBrowserClass()
+ {
+ // Mock the ListBuckets method of S3 client
+ $mockS3Client = $this->getMockBuilder('Aws\S3\S3Client')
+ ->disableOriginalConstructor()
+ ->getMock();
+ $mockS3Client->expects($this->any())
+ ->method('listBuckets')
+ ->will($this->returnValue(new Model(array(
+ 'Buckets' => array(
+ array('Name' => 'foo'),
+ array('Name' => 'bar'),
+ array('Name' => 'baz')
+ )
+ ))));
+ $this->serviceBuilder->set('s3', $mockS3Client);
+
+ // Test the FileBrowser object that uses the S3 client facade internally
+ $fileBrowser = new FileBrowser();
+ $partitions = $fileBrowser->getPartitions();
+ $this->assertEquals(array('foo', 'bar', 'baz'), $partitions);
+ }
+ }
+
+Alternatively, if you are specifically only mocking responses from clients, you might consider using the `Guzzle Mock
+Plugin `_.
diff --git a/vendor/aws/aws-sdk-php/docs/feature-iterators.rst b/vendor/aws/aws-sdk-php/docs/feature-iterators.rst
new file mode 100644
index 0000000..7c08304
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/feature-iterators.rst
@@ -0,0 +1,93 @@
+=========
+Iterators
+=========
+
+Introduction
+------------
+
+.. include:: _snippets/iterators-intro.txt
+
+The ``getIterator()`` method also accepts a command object for the first argument. If you have a command object already
+instantiated, you can create an iterator directly from the command object.
+
+.. code-block:: php
+
+ $command = $client->getCommand('ListObjects', array('Bucket' => 'my-bucket'));
+ $iterator = $client->getIterator($command);
+
+Iterator Objects
+----------------
+
+The actual object returned by ``getIterator()`` is an instance of the ``Aws\Common\Iterator\AwsResourceIterator`` class
+(see the `API docs `_
+for more information about its methods and properties). This class implements PHP's native ``Iterator`` interface, which
+is why it works with ``foreach``, can be used with iterator functions like ``iterator_to_array``, and integrates well
+with `SPL iterators `_ like ``LimitIterator``.
+
+Iterator objects only store one "page" of results at a time and only make as many requests as they need based on the
+current iteration. The S3 ``ListObjects`` operation only returns up to 1000 objects at a time. If your bucket has ~10000
+objects, then the iterator would need to do 10 requests. However, it does not execute the subsequent requests until
+needed. If you are iterating through the results, the first request would happen when you start iterating, and the
+second request would not happen until you iterate to the 1001st object. This can help your application save memory by
+only holding one page of results at a time.
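+
+Because the iterator implements the native ``Iterator`` interface, it composes with the SPL iterators mentioned
+earlier. As a sketch (the bucket name is a placeholder), you could use a ``LimitIterator`` to process only a slice of
+the results:
+
+.. code-block:: php
+
+ $iterator = $client->getIterator('ListObjects', array('Bucket' => 'my-bucket'));
+
+ // Skip the first 5 objects, then process at most the next 10
+ foreach (new LimitIterator($iterator, 5, 10) as $object) {
+ echo $object['Key'] . "\n";
+ }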
+
+Basic Configuration
+-------------------
+
+Iterators accept an extra set of parameters that are not passed into the commands. You can set a limit on the number of
+results you want with the ``limit`` parameter, and you can control how many results you want to get back per request
+using the ``page_size`` parameter. If no ``limit`` is specified, then all results are retrieved. If no ``page_size`` is
+specified, then the Iterator will use the maximum page size allowed by the operation being executed.
+
+The following example will make 10 Amazon S3 ``ListObjects`` requests (assuming there are more than 1000 objects in the
+specified bucket) that each return up to 100 objects. The ``foreach`` loop will yield up to 999 objects.
+
+.. code-block:: php
+
+ $iterator = $client->getIterator('ListObjects', array(
+ 'Bucket' => 'my-bucket'
+ ), array(
+ 'limit' => 999,
+ 'page_size' => 100
+ ));
+
+ foreach ($iterator as $object) {
+ echo $object['Key'] . "\n";
+ }
+
+There are some limitations to the ``limit`` and ``page_size`` parameters though. Not all operations support specifying
+a page size or limit, so the Iterator will do its best with what you provide. For example, if an operation always
+returns 1000 results, and you specify a limit of 100, the Iterator will only yield 100 results, even though the actual
+request sent to the service yielded 1000.
+
+Iterator Events
+---------------
+
+Iterators emit 2 kinds of events:
+
+1. ``resource_iterator.before_send`` - Emitted right before a request is sent to retrieve results.
+2. ``resource_iterator.after_send`` - Emitted right after a request is sent to retrieve results.
+
+Iterator objects extend the ``Guzzle\Common\AbstractHasDispatcher`` class, which exposes the ``addSubscriber()`` and
+``getEventDispatcher()`` methods. To attach listeners, you can use the following example, which echoes a message
+right before and after a request is executed by the iterator.
+
+.. code-block:: php
+
+ $iterator = $client->getIterator('ListObjects', array(
+ 'Bucket' => 'my-bucket'
+ ));
+
+ // Get the event dispatcher and register listeners for both events
+ $dispatcher = $iterator->getEventDispatcher();
+ $dispatcher->addListener('resource_iterator.before_send', function ($event) {
+ echo "Getting more results…\n";
+ });
+ $dispatcher->addListener('resource_iterator.after_send', function ($event) use ($iterator) {
+ $requestCount = $iterator->getRequestCount();
+ echo "Results received. {$requestCount} request(s) made so far.\n";
+ });
+
+ foreach ($iterator as $object) {
+ echo $object['Key'] . "\n";
+ }
diff --git a/vendor/aws/aws-sdk-php/docs/feature-models.rst b/vendor/aws/aws-sdk-php/docs/feature-models.rst
new file mode 100644
index 0000000..1d9614f
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/feature-models.rst
@@ -0,0 +1,167 @@
+=================
+Modeled Responses
+=================
+
+Introduction
+------------
+
+.. include:: _snippets/models-intro.txt
+
+Working with Model objects
+--------------------------
+
+Model objects (and Command objects) inherit from the `Guzzle Collection class
+`_ and implement PHP's native
+``ArrayAccess``, ``IteratorAggregate``, and ``Countable`` interfaces. This means that they behave like arrays when you
+are accessing keys and iterating over key-value pairs. You can also use the ``toArray()`` method of the Model object to
+get the array form directly.
+
+However, model objects will not throw errors on undefined keys, so it's safe to use values directly without doing
+``isset()`` checks. If the key doesn't exist, then the value will be returned as ``null``.
+
+.. code-block:: php
+
+ // Use an instance of S3Client to get an object
+ $result = $s3Client->getObject(array(
+ 'Bucket' => 'my-bucket',
+ 'Key' => 'test.txt'
+ ));
+
+ // Using a value that may not exist
+ if (!$result['ContentLength']) {
+ echo "Empty file.";
+ }
+
+ $isDeleted = (bool) $result->get('DeleteMarker');
+
+Of course, you can still use ``isset()`` checks if you want to, since ``Model`` does implement ``ArrayAccess``. The
+model object (and underlying Collection object) also has convenience methods for finding and checking for keys and
+values.
+
+.. code-block:: php
+
+ // You can use isset() since the object implements ArrayAccess
+ if (!isset($result['ContentLength'])) {
+ echo "Empty file.";
+ }
+
+ // There is also a method that does the same type of check
+ if (!$result->hasKey('ContentLength')) {
+ echo "Empty file.";
+ }
+
+ // If needed, you can search for a key in a case-insensitive manner
+ echo $result->keySearch('body');
+ //> Body
+ echo $result->keySearch('Body');
+ //> Body
+
+ // You can also list all of the keys in the result
+ var_export($result->getKeys());
+ //> array ( 'Body', 'DeleteMarker', 'Expiration', 'ContentLength', ... )
+
+ // The getAll() method will return the result data as an array
+ // You can specify a set of keys to only get a subset of the data
+ var_export($result->getAll(array('Body', 'ContentLength')));
+ //> array ( 'Body' => 'Hello!' , 'ContentLength' => 6 )
+
+Getting nested values
+~~~~~~~~~~~~~~~~~~~~~
+
+The ``getPath()`` method of the model is useful for easily getting nested values from a response. The path is specified
+as a series of keys separated by slashes.
+
+.. code-block:: php
+
+ // Perform a RunInstances operation and traverse into the results to get the InstanceId
+ $result = $ec2Client->runInstances(array(
+ 'ImageId' => 'ami-548f13d',
+ 'MinCount' => 1,
+ 'MaxCount' => 1,
+ 'InstanceType' => 't1.micro',
+ ));
+ $instanceId = $result->getPath('Instances/0/InstanceId');
+
+Wildcards are also supported so that you can extract an array of data. The following example is a modification of
+the preceding one such that multiple InstanceIds can be retrieved.
+
+.. code-block:: php
+
+ // Perform a RunInstances operation and get an array of the InstanceIds that were created
+ $result = $ec2Client->runInstances(array(
+ 'ImageId' => 'ami-548f13d',
+ 'MinCount' => 3,
+ 'MaxCount' => 5,
+ 'InstanceType' => 't1.micro',
+ ));
+ $instanceIds = $result->getPath('Instances/*/InstanceId');
+
+Using data in the model
+-----------------------
+
+Response Models contain the parsed data from the response from a service operation, so the contents of the model will
+be different depending on which operation you've performed.
+
+The SDK's API docs are the best resource for discovering what the model object will contain for a given operation. The
+API docs contain a full specification of the data in the response model under the *Returns* section of the docs for an
+operation (e.g., `S3 GetObject operation `_,
+`EC2 RunInstances operation `_).
+
+From within your code you can convert the response model directly into an array using the ``toArray()`` method. If you
+are doing some debugging in your code, you could use ``toArray()`` in conjunction with ``print_r()`` to print out a
+simple representation of the response data.
+
+.. code-block:: php
+
+ $result = $ec2Client->runInstances(array(/* ... */));
+ print_r($result->toArray());
+
+You can also examine the service description for a service, which is located in the ``Resources`` directory within a
+given client's namespace directory. For example, here is a snippet from the SQS service description (located in
+``src/Aws/Sqs/Resources/``) that shows the schema for the response of the ``SendMessage`` operation.
+
+.. code-block:: php
+
+ // ...
+ 'SendMessageResult' => array(
+ 'type' => 'object',
+ 'additionalProperties' => true,
+ 'properties' => array(
+ 'MD5OfMessageBody' => array(
+ 'description' => 'An MD5 digest of the non-URL-encoded message body string. This can be used [...]',
+ 'type' => 'string',
+ 'location' => 'xml',
+ ),
+ 'MessageId' => array(
+ 'description' => 'The message ID of the message added to the queue.',
+ 'type' => 'string',
+ 'location' => 'xml',
+ ),
+ ),
+ ),
+ // ...
+
+Getting Response Headers
+------------------------
+
+The ``Response`` object is not directly accessible from the ``Model`` object. If you are interested in getting header
+values, the status code, or other data from the response you will need to get the ``Response`` object from the
+``Command`` object (see :doc:`feature-commands`). You may need to switch from using the shorthand command syntax to the
+expanded syntax so that the command object can be accessed directly.
+
+.. code-block:: php
+
+ // Getting the response Model with the shorthand syntax
+ $result = $s3Client->createBucket(array(/* ... */));
+
+ // Getting the response Model with the expanded syntax
+ $command = $s3Client->getCommand('CreateBucket', array(/* ... */));
+ $result = $command->getResult();
+
+ // Getting the Response object from the Command
+ $response = $command->getResponse();
+ $contentLength = $response->getHeader('Content-Length');
+ $statusCode = $response->getStatusCode();
+
+In some cases, particularly with REST-like services like Amazon S3 and Amazon Glacier, most of the important headers are
+already included in the response model.
diff --git a/vendor/aws/aws-sdk-php/docs/feature-s3-stream-wrapper.rst b/vendor/aws/aws-sdk-php/docs/feature-s3-stream-wrapper.rst
new file mode 100644
index 0000000..baa9222
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/feature-s3-stream-wrapper.rst
@@ -0,0 +1,279 @@
+========================
+Amazon S3 Stream Wrapper
+========================
+
+Introduction
+------------
+
+The Amazon S3 stream wrapper allows you to store and retrieve data from Amazon S3 using built-in PHP functions like
+``file_get_contents``, ``fopen``, ``copy``, ``rename``, ``unlink``, ``mkdir``, ``rmdir``, etc.
+
+You need to register the Amazon S3 stream wrapper in order to use it:
+
+.. code-block:: php
+
+ // Register the stream wrapper from an S3Client object
+ $client->registerStreamWrapper();
+
+This allows you to access buckets and objects stored in Amazon S3 using the ``s3://`` protocol. The "s3" stream wrapper
+accepts strings that contain a bucket name followed by a forward slash and an optional object key or prefix:
+``s3://<bucket>[/<key-or-prefix>]``.
+
+Downloading data
+----------------
+
+You can grab the contents of an object using ``file_get_contents``. Be careful with this function though; it loads the
+entire contents of the object into memory.
+
+.. code-block:: php
+
+ // Download the body of the "key" object in the "bucket" bucket
+ $data = file_get_contents('s3://bucket/key');
+
+Use ``fopen()`` when working with larger files or if you need to stream data from Amazon S3.
+
+.. code-block:: php
+
+ // Open a stream in read-only mode
+ if ($stream = fopen('s3://bucket/key', 'r')) {
+ // While the stream is still open
+ while (!feof($stream)) {
+ // Read 1024 bytes from the stream
+ echo fread($stream, 1024);
+ }
+ // Be sure to close the stream resource when you're done with it
+ fclose($stream);
+ }
+
+Opening Seekable streams
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Streams opened in "r" mode only allow data to be read from the stream, and are not seekable by default. This is so that
+data can be downloaded from Amazon S3 in a truly streaming manner where previously read bytes do not need to be
+buffered into memory. If you need a stream to be seekable, you can pass ``seekable`` into the `stream context
+options `_ of a function.
+
+.. code-block:: php
+
+ $context = stream_context_create(array(
+ 's3' => array(
+ 'seekable' => true
+ )
+ ));
+
+ if ($stream = fopen('s3://bucket/key', 'r', false, $context)) {
+ // Read bytes from the stream
+ fread($stream, 1024);
+ // Seek back to the beginning of the stream
+ fseek($stream, 0);
+ // Read the same bytes that were previously read
+ fread($stream, 1024);
+ fclose($stream);
+ }
+
+Opening seekable streams allows you to seek only to bytes that were previously read. You cannot skip ahead to bytes
+that have not yet been read from the remote server. In order to allow previously read data to be recalled, data is
+buffered in a PHP temp stream using Guzzle's
+`CachingEntityBody `_ decorator.
+When the amount of cached data exceeds 2 MB, the data in the temp stream will transfer from memory to disk. Keep this in
+mind when downloading large files from Amazon S3 using the ``seekable`` stream context setting.
+
+Uploading data
+--------------
+
+Data can be uploaded to Amazon S3 using ``file_put_contents()``.
+
+.. code-block:: php
+
+ file_put_contents('s3://bucket/key', 'Hello!');
+
+You can upload larger files by streaming data using ``fopen()`` and a "w", "x", or "a" stream access mode. The Amazon
+S3 stream wrapper does **not** support simultaneous read and write streams (e.g. "r+", "w+", etc). This is because the
+HTTP protocol does not allow simultaneous reading and writing.
+
+.. code-block:: php
+
+ $stream = fopen('s3://bucket/key', 'w');
+ fwrite($stream, 'Hello!');
+ fclose($stream);
+
+.. note::
+
+ Because Amazon S3 requires a Content-Length header to be specified before the payload of a request is sent, the
+ data to be uploaded in a PutObject operation is internally buffered using a PHP temp stream until the stream is
+ flushed or closed.
+
+fopen modes
+-----------
+
+PHP's `fopen() `_ function requires that a ``$mode`` option is specified.
+The mode option specifies whether or not data can be read or written to a stream and if the file must exist when
+opening a stream. The Amazon S3 stream wrapper supports the following modes:
+
+= ======================================================================================================================
+r A read only stream where the file must already exist.
+w A write only stream. If the file already exists it will be overwritten.
+a A write only stream. If the file already exists, it will be downloaded to a temporary stream and any writes to
+ the stream will be appended to any previously uploaded data.
+x A write only stream. An error is raised if the file already exists.
+= ======================================================================================================================
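+
+For example, the "a" mode can be used to append to an existing object. Note that, as described in the table above, the
+existing data must first be downloaded, so the full object is re-uploaded when the stream is flushed or closed.
+
+.. code-block:: php
+
+ // Append a line to an existing object; the previous contents are
+ // downloaded to a temporary stream and the whole object is re-uploaded
+ $stream = fopen('s3://bucket/key', 'a');
+ fwrite($stream, "Another line\n");
+ fclose($stream);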
+
+Other object functions
+----------------------
+
+Stream wrappers allow many different built-in PHP functions to work with a custom system like Amazon S3. Here are some
+of the functions that the Amazon S3 stream wrapper allows you to perform with objects stored in Amazon S3.
+
+=============== ========================================================================================================
+unlink() Delete an object from a bucket.
+
+ .. code-block:: php
+
+ // Delete an object from a bucket
+ unlink('s3://bucket/key');
+
+ You can pass in any options available to the ``DeleteObject`` operation to modify how the object is
+ deleted (e.g. specifying a specific object version).
+
+ .. code-block:: php
+
+ // Delete a specific version of an object from a bucket
+ unlink('s3://bucket/key', stream_context_create(array(
+ 's3' => array('VersionId' => '123')
+ )));
+
+filesize() Get the size of an object.
+
+ .. code-block:: php
+
+ // Get the Content-Length of an object
+ $size = filesize('s3://bucket/key');
+
+is_file() Checks if a URL is a file.
+
+ .. code-block:: php
+
+ if (is_file('s3://bucket/key')) {
+ echo 'It is a file!';
+ }
+
+file_exists() Checks if an object exists.
+
+ .. code-block:: php
+
+ if (file_exists('s3://bucket/key')) {
+ echo 'It exists!';
+ }
+
+filetype() Checks if a URL maps to a file or bucket (dir).
+file() Load the contents of an object in an array of lines. You can pass in any options available to the
+ ``GetObject`` operation to modify how the file is downloaded.
+filemtime() Get the last modified date of an object.
+rename() Rename an object by copying the object then deleting the original. You can pass in options available to
+ the ``CopyObject`` and ``DeleteObject`` operations to the stream context parameters to modify how the
+ object is copied and deleted.
+copy() Copy an object from one location to another. You can pass options available to the ``CopyObject``
+ operation into the stream context options to modify how the object is copied.
+
+ .. code-block:: php
+
+ // Copy a file on Amazon S3 to another bucket
+ copy('s3://bucket/key', 's3://other_bucket/key');
+
+=============== ========================================================================================================
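+
+As a quick sketch of a couple of the functions listed above (the bucket and key names are placeholders):
+
+.. code-block:: php
+
+ // Get the last modified date of an object
+ $lastModified = filemtime('s3://bucket/key');
+
+ // Rename an object; this performs a CopyObject followed by a DeleteObject
+ rename('s3://bucket/key', 's3://bucket/new_key');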
+
+Working with buckets
+--------------------
+
+You can modify and browse Amazon S3 buckets similar to how PHP allows the modification and traversal of directories on
+your filesystem.
+
+Here's an example of creating a bucket:
+
+.. code-block:: php
+
+ mkdir('s3://bucket');
+
+You can pass in stream context options to the ``mkdir()`` function to modify how the bucket is created using the
+parameters available to the
+`CreateBucket `_ operation.
+
+.. code-block:: php
+
+ // Create a bucket in the EU region
+ mkdir('s3://bucket', stream_context_create(array(
+ 's3' => array(
+ 'LocationConstraint' => 'eu-west-1'
+ )
+ )));
+
+You can delete buckets using the ``rmdir()`` function.
+
+.. code-block:: php
+
+ // Delete a bucket
+ rmdir('s3://bucket');
+
+.. note::
+
+ A bucket can only be deleted if it is empty.
+
+Listing the contents of a bucket
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The `opendir() `_,
+`readdir() `_,
+`rewinddir() `_, and
+`closedir() `_ PHP functions can be used with the Amazon S3 stream
+wrapper to traverse the contents of a bucket. You can pass in parameters available to the
+`ListObjects `_ operation as
+custom stream context options to the ``opendir()`` function to modify how objects are listed.
+
+.. code-block:: php
+
+ $dir = "s3://bucket/";
+
+ if (is_dir($dir) && ($dh = opendir($dir))) {
+ while (($file = readdir($dh)) !== false) {
+ echo "filename: {$file} : filetype: " . filetype($dir . $file) . "\n";
+ }
+ closedir($dh);
+ }
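+
+For example, you could pass a ``Delimiter`` parameter for the underlying ``ListObjects`` operation through the stream
+context. This is only a sketch; which parameters are honored may vary by SDK version.
+
+.. code-block:: php
+
+ // List the bucket using "/" as the delimiter for key prefixes
+ $context = stream_context_create(array(
+ 's3' => array('Delimiter' => '/')
+ ));
+
+ if ($dh = opendir('s3://bucket', $context)) {
+ while (($file = readdir($dh)) !== false) {
+ echo $file . "\n";
+ }
+ closedir($dh);
+ }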
+
+You can recursively list each object and prefix in a bucket using PHP's
+`RecursiveDirectoryIterator `_.
+
+.. code-block:: php
+
+ $dir = 's3://bucket';
+ $iterator = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($dir));
+
+ foreach ($iterator as $file) {
+ echo $file->getType() . ': ' . $file . "\n";
+ }
+
+Another easy way to list the contents of the bucket is using the
+`Symfony2 Finder component `_.
+
+.. code-block:: php
+
+ require 'vendor/autoload.php';
+
+ use Symfony\Component\Finder\Finder;
+
+ // Register the stream wrapper from a configured S3 client ($aws is your service builder)
+ $aws->get('s3')->registerStreamWrapper();
+
+ $finder = new Finder();
+
+ // Get all files and folders (key prefixes) from "bucket" that are less than 100k
+ // and have been updated in the last year
+ $finder->in('s3://bucket')
+ ->size('< 100K')
+ ->date('since 1 year ago');
+
+ foreach ($finder as $file) {
+ echo $file->getType() . ": {$file}\n";
+ }
diff --git a/vendor/aws/aws-sdk-php/docs/feature-waiters.rst b/vendor/aws/aws-sdk-php/docs/feature-waiters.rst
new file mode 100644
index 0000000..2526e4c
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/feature-waiters.rst
@@ -0,0 +1,175 @@
+=======
+Waiters
+=======
+
+Introduction
+------------
+
+.. include:: _snippets/waiters-intro.txt
+
+If the Waiter has to poll the bucket too many times, it will throw an ``Aws\Common\Exception\RuntimeException``
+exception.
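+
+For example, you might wrap a wait call in a try/catch block so that your application can react when the condition is
+never met (a sketch; the bucket name is a placeholder):
+
+.. code-block:: php
+
+ use Aws\Common\Exception\RuntimeException;
+
+ try {
+ $s3Client->waitUntilBucketExists(array('Bucket' => 'my-bucket'));
+ echo "The bucket is now available.\n";
+ } catch (RuntimeException $e) {
+ // The waiter gave up before the bucket became available
+ echo 'Timed out waiting for the bucket: ' . $e->getMessage() . "\n";
+ }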
+
+Basic Configuration
+-------------------
+
+You can tune the number of polling attempts issued by a Waiter or the number of seconds to delay between each poll by
+passing optional values prefixed with "waiter.":
+
+.. code-block:: php
+
+ $s3Client->waitUntilBucketExists(array(
+ 'Bucket' => 'my-bucket',
+ 'waiter.interval' => 10,
+ 'waiter.max_attempts' => 3
+ ));
+
+Waiter Objects
+--------------
+
+To interact with the Waiter object directly, you must use the ``getWaiter()`` method. The following code is equivalent
+to the example in the preceding section.
+
+.. code-block:: php
+
+ $bucketExistsWaiter = $s3Client->getWaiter('BucketExists')
+ ->setConfig(array('Bucket' => 'my-bucket'))
+ ->setInterval(10)
+ ->setMaxAttempts(3);
+ $bucketExistsWaiter->wait();
+
+Waiter Events
+-------------
+
+One benefit of working directly with the Waiter object is that you can attach event listeners. Waiters emit up to two
+events in each **wait cycle**. A wait cycle does the following:
+
+#. Dispatch the ``waiter.before_attempt`` event.
+#. Attempt to resolve the wait condition by making a request to the service and checking the result.
+#. If the wait condition is resolved, the wait cycle exits. If ``max_attempts`` is reached, an exception is thrown.
+#. Dispatch the ``waiter.before_wait`` event.
+#. Sleep ``interval`` amount of seconds.
+
+Waiter objects extend the ``Guzzle\Common\AbstractHasDispatcher`` class which exposes the ``addSubscriber()`` method and
+``getEventDispatcher()`` method. To attach listeners, you can use the following example, which is a modified version of
+the previous one.
+
+.. code-block:: php
+
+ // Get and configure the Waiter object
+ $waiter = $s3Client->getWaiter('BucketExists')
+ ->setConfig(array('Bucket' => 'my-bucket'))
+ ->setInterval(10)
+ ->setMaxAttempts(3);
+
+ // Get the event dispatcher and register listeners for both events emitted by the Waiter
+ $dispatcher = $waiter->getEventDispatcher();
+ $dispatcher->addListener('waiter.before_attempt', function () {
+ echo "Checking if the wait condition has been met…\n";
+ });
+ $dispatcher->addListener('waiter.before_wait', function () use ($waiter) {
+ $interval = $waiter->getInterval();
+ echo "Sleeping for {$interval} seconds…\n";
+ });
+
+ $waiter->wait();
+
+Custom Waiters
+--------------
+
+It is possible to implement custom Waiter objects if your use case requires application-specific Waiter logic or Waiters
+that are not yet supported by the SDK. You can use the ``getWaiterFactory()`` and ``setWaiterFactory()`` methods on the
+client to manipulate the Waiter factory used by the client such that your custom Waiter can be instantiated. By default,
+the service clients use an ``Aws\Common\Waiter\CompositeWaiterFactory``, which allows you to add additional factories if
+needed. The following example shows how to implement a contrived custom Waiter class and then modify a client's Waiter
+factory such that it can create instances of the custom Waiter.
+
+.. code-block:: php
+
+ namespace MyApp\FakeWaiters
+ {
+ use Aws\Common\Waiter\AbstractResourceWaiter;
+
+ class SleptThreeTimes extends AbstractResourceWaiter
+ {
+ public function doWait()
+ {
+ if ($this->attempts < 3) {
+ echo "Need to sleep…\n";
+ return false;
+ } else {
+ echo "Now I've slept 3 times.\n";
+ return true;
+ }
+ }
+ }
+ }
+
+ namespace
+ {
+ use Aws\S3\S3Client;
+ use Aws\Common\Waiter\WaiterClassFactory;
+
+ $s3Client = S3Client::factory();
+
+ $compositeFactory = $s3Client->getWaiterFactory();
+ $compositeFactory->addFactory(new WaiterClassFactory('MyApp\FakeWaiters'));
+
+ $waiter = $s3Client->waitUntilSleptThreeTimes();
+ }
+
+The result of this code should look like the following::
+
+ Need to sleep…
+ Need to sleep…
+ Need to sleep…
+ Now I've slept 3 times.
+
+Waiter Definitions
+------------------
+
+The Waiters that are included in the SDK are defined in the service description for their client. They are defined
+using a configuration DSL (domain-specific language) that describes the default wait intervals, wait conditions, and
+how to check or poll the resource to resolve the condition.
+
+This data is automatically consumed and used by the ``Aws\Common\Waiter\WaiterConfigFactory`` class when a client is
+instantiated so that the waiters defined in the service description are available to the client.
+
+The following is an excerpt of the Amazon Glacier service description that defines the Waiters provided by
+``Aws\Glacier\GlacierClient``.
+
+.. code-block:: php
+
+ return array(
+ // ...
+
+ 'waiters' => array(
+ '__default__' => array(
+ 'interval' => 3,
+ 'max_attempts' => 15,
+ ),
+ '__VaultState' => array(
+ 'operation' => 'DescribeVault',
+ ),
+ 'VaultExists' => array(
+ 'extends' => '__VaultState',
+ 'success.type' => 'output',
+ 'description' => 'Wait until a vault can be accessed.',
+ 'ignore_errors' => array(
+ 'ResourceNotFoundException',
+ ),
+ ),
+ 'VaultNotExists' => array(
+ 'extends' => '__VaultState',
+ 'description' => 'Wait until a vault is deleted.',
+ 'success.type' => 'error',
+ 'success.value' => 'ResourceNotFoundException',
+ ),
+ ),
+
+ // ...
+ );
+
+In order for you to contribute Waiters to the SDK, you will need to implement them using the Waiters DSL. The DSL is not
+documented yet, since it is currently subject to change, so if you are interested in helping to implement more Waiters,
+please reach out to us via `GitHub `_.
diff --git a/vendor/aws/aws-sdk-php/docs/index.rst b/vendor/aws/aws-sdk-php/docs/index.rst
new file mode 100644
index 0000000..fa03a4c
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/index.rst
@@ -0,0 +1,282 @@
+===============
+AWS SDK for PHP
+===============
+
+.. toctree::
+ :hidden:
+
+ awssignup
+ requirements
+ installation
+ quick-start
+ migration-guide
+ side-by-side
+
+ credentials
+ configuration
+ feature-commands
+ feature-waiters
+ feature-iterators
+ feature-models
+ feature-facades
+ performance
+ faq
+
+ service-autoscaling
+ service-cloudformation
+ service-cloudfront
+ service-cloudfront-20120505
+ service-cloudsearch
+ service-cloudtrail
+ service-cloudwatch
+ service-datapipeline
+ service-directconnect
+ service-dynamodb
+ service-dynamodb-20111205
+ service-ec2
+ service-elasticache
+ service-elasticbeanstalk
+ service-elasticloadbalancing
+ service-elastictranscoder
+ service-emr
+ service-glacier
+ service-iam
+ service-importexport
+ service-kinesis
+ service-opsworks
+ service-rds
+ service-redshift
+ service-route53
+ service-s3
+ service-ses
+ service-simpledb
+ service-sns
+ service-sqs
+ service-storagegateway
+ service-sts
+ service-support
+ service-swf
+ feature-dynamodb-session-handler
+ feature-s3-stream-wrapper
+
+The **AWS SDK for PHP** enables PHP developers to use `Amazon Web Services `_ from their PHP
+code, and build robust applications and software using services like Amazon S3, Amazon DynamoDB, Amazon Glacier, etc.
+You can get started in minutes by installing the SDK through Composer — by requiring the ``aws/aws-sdk-php`` package —
+or by downloading the standalone `aws.zip `_ or
+`aws.phar `_ files.
+
+Getting Started
+---------------
+
+* Before you use the SDK
+
+ * `Sign up for AWS and get your AWS access keys `_
+ * :doc:`Verify that your system meets the minimum requirements for the SDK `
+ * :doc:`Install the AWS SDK for PHP `
+
+* Using the SDK
+
+ * :doc:`quick-start` – Everything you need to know to use the AWS SDK for PHP
+ * `Sample Project `_
+
+* Migrating from Version 1 of the SDK?
+
+ * :doc:`migration-guide` – Migrating from Version 1 of the SDK to Version 2
+ * :doc:`side-by-side` – Using Version 1 and Version 2 of the SDK side-by-side in the same project
+
+In-Depth Guides
+---------------
+
+* :doc:`credentials`
+* :doc:`configuration`
+* SDK Features
+
+ * :doc:`feature-iterators`
+ * :doc:`feature-waiters`
+ * :doc:`feature-commands`
+ * :ref:`Parallel Commands `
+ * :doc:`feature-models`
+
+* :doc:`faq`
+* :doc:`performance`
+* `Contributing to the SDK `_
+* `Guzzle V3 Documentation `_
+
+.. _supported-services:
+
+Service-Specific Guides
+-----------------------
+
+* Amazon CloudFront
+
+ .. indexlinks:: CloudFront
+
+ * :doc:`Using the older 2012-05-05 API version `
+
+* Amazon CloudSearch
+
+ .. indexlinks:: CloudSearch
+
+ * :doc:`Using the older 2011-02-01 API version `
+
+* Amazon CloudWatch
+
+ .. indexlinks:: CloudWatch
+
+* Amazon DynamoDB
+
+ .. indexlinks:: DynamoDb
+
+ * :doc:`Special Feature: DynamoDB Session Handler `
+ * :doc:`Using the older 2011-12-05 API version `
+
+* Amazon Elastic Compute Cloud (Amazon EC2)
+
+ .. indexlinks:: Ec2
+
+* Amazon Elastic MapReduce (Amazon EMR)
+
+ .. indexlinks:: Emr
+
+* Amazon Elastic Transcoder
+
+ .. indexlinks:: ElasticTranscoder
+
+* Amazon ElastiCache
+
+ .. indexlinks:: ElastiCache
+
+* Amazon Glacier
+
+ .. indexlinks:: Glacier
+
+* Amazon Kinesis
+
+ .. indexlinks:: Kinesis
+
+* Amazon Redshift
+
+ .. indexlinks:: Redshift
+
+* Amazon Relational Database Service (Amazon RDS)
+
+ .. indexlinks:: Rds
+
+* Amazon Route 53
+
+ .. indexlinks:: Route53
+
+* Amazon Simple Email Service (Amazon SES)
+
+ .. indexlinks:: Ses
+
+* Amazon Simple Notification Service (Amazon SNS)
+
+ .. indexlinks:: Sns
+
+* Amazon Simple Queue Service (Amazon SQS)
+
+ .. indexlinks:: Sqs
+
+* Amazon Simple Storage Service (Amazon S3)
+
+ .. indexlinks:: S3
+
+ * :doc:`Special Feature: Amazon S3 Stream Wrapper `
+
+* Amazon Simple Workflow Service (Amazon SWF)
+
+ .. indexlinks:: Swf
+
+* Amazon SimpleDB
+
+ .. indexlinks:: SimpleDb
+
+* Auto Scaling
+
+ .. indexlinks:: AutoScaling
+
+* AWS CloudFormation
+
+ .. indexlinks:: CloudFormation
+
+* AWS CloudTrail
+
+ .. indexlinks:: CloudTrail
+
+* AWS Data Pipeline
+
+ .. indexlinks:: DataPipeline
+
+* AWS Direct Connect
+
+ .. indexlinks:: DirectConnect
+
+* AWS Elastic Beanstalk
+
+ .. indexlinks:: ElasticBeanstalk
+
+* AWS Identity and Access Management (AWS IAM)
+
+ .. indexlinks:: Iam
+
+* AWS Import/Export
+
+ .. indexlinks:: ImportExport
+
+* AWS OpsWorks
+
+ .. indexlinks:: OpsWorks
+
+* AWS Security Token Service (AWS STS)
+
+ .. indexlinks:: Sts
+
+* AWS Storage Gateway
+
+ .. indexlinks:: StorageGateway
+
+* AWS Support
+
+ .. indexlinks:: Support
+
+* Elastic Load Balancing
+
+ .. indexlinks:: ElasticLoadBalancing
+
+Articles from the Blog
+----------------------
+
+* `Syncing Data with Amazon S3 `_
+* `Amazon S3 PHP Stream Wrapper `_
+* `Transferring Files To and From Amazon S3 `_
+* `Provision an Amazon EC2 Instance with PHP `_
+* `Uploading Archives to Amazon Glacier from PHP `_
+* `Using AWS CloudTrail in PHP - Part 1 `_
+* `Using AWS CloudTrail in PHP - Part 2 `_
+* `Providing credentials to the AWS SDK for PHP `_
+* `Using Credentials from AWS Security Token Service `_
+* `Iterating through Amazon DynamoDB Results `_
+* `Sending requests through a proxy `_
+* `Wire Logging in the AWS SDK for PHP `_
+* `Streaming Amazon S3 Objects From a Web Server `_
+* `Using New Regions and Endpoints `_
+* `Receiving Amazon SNS Messages in PHP `_
+* `Testing Webhooks Locally for Amazon SNS `_
+
+Presentations
+-------------
+
+Slides
+~~~~~~
+
+* `Mastering the AWS SDK for PHP `_
+* `Getting Good with the AWS SDK for PHP `_
+* `Using DynamoDB with the AWS SDK for PHP `_
+* `Controlling the AWS Cloud with PHP `_
+
+Videos
+~~~~~~
+
+* `Mastering the AWS SDK for PHP `_ (AWS re:Invent 2013)
+* `Using DynamoDB with the AWS SDK for PHP `_ (AWS re:Invent 2012)
diff --git a/vendor/aws/aws-sdk-php/docs/installation.rst b/vendor/aws/aws-sdk-php/docs/installation.rst
new file mode 100644
index 0000000..ee57300
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/installation.rst
@@ -0,0 +1,150 @@
+============
+Installation
+============
+
+Installing via Composer
+-----------------------
+
+Using `Composer `_ is the recommended way to install the AWS SDK for PHP. Composer is a
+dependency management tool for PHP that allows you to declare the dependencies your project needs and installs them into
+your project. In order to use the SDK with Composer, you must do the following:
+
+#. Add ``"aws/aws-sdk-php"`` as a dependency in your project's ``composer.json`` file.
+
+ .. code-block:: js
+
+ {
+ "require": {
+ "aws/aws-sdk-php": "2.*"
+ }
+ }
+
+ Consider tightening your dependencies to a known version (e.g., ``2.5.*``).
+
+#. Download and install Composer.
+
+ .. code-block:: sh
+
+ curl -sS https://getcomposer.org/installer | php
+
+#. Install your dependencies.
+
+ .. code-block:: sh
+
+ php composer.phar install
+
+#. Require Composer's autoloader.
+
+ Composer prepares an autoload file that's capable of autoloading all of the classes in any of the libraries that
+ it downloads. To use it, just add the following line to your code's bootstrap process.
+
+ .. code-block:: php
+
+ require '/path/to/sdk/vendor/autoload.php';
+
+You can find out more on how to install Composer, configure autoloading, and other best practices for defining
+dependencies at `getcomposer.org `_.
+
+During your development, you can keep up with the latest changes on the master branch by setting the version
+requirement for the SDK to ``dev-master``.
+
+.. code-block:: js
+
+ {
+ "require": {
+ "aws/aws-sdk-php": "dev-master"
+ }
+ }
+
+If you are deploying your application to `AWS Elastic Beanstalk
+`_, and you have a ``composer.json``
+file in the root of your package, then Elastic Beanstalk will automatically perform a Composer ``install`` when you
+deploy your application.
+
+Installing via Phar
+-------------------
+
+Each release of the AWS SDK for PHP ships with a pre-packaged `phar `_ (PHP
+archive) file containing all of the classes and dependencies you need to run the SDK. Additionally, the phar file
+automatically registers a class autoloader for the AWS SDK for PHP and all of its dependencies when included. Bundled
+with the phar file are the following required and suggested libraries:
+
+- `Guzzle `_ for HTTP requests
+- `Symfony2 EventDispatcher `_ for events
+- `Monolog `_ and `Psr\\Log `_ for logging
+- `Doctrine `_ for caching
+
+You can `download the packaged Phar `_ and simply include it in your
+scripts to get started::
+
+ require '/path/to/aws.phar';
+
+If you have `phing `_ installed, you can clone the SDK and build a phar file yourself using the
+*"phar"* task.
+
+.. note::
+
+ If you are using PHP with the Suhosin patch (especially common on Ubuntu and Debian distributions), you may need
+ to enable the use of phars in the ``suhosin.ini``. Without this, including a phar file in your code will cause it to
+ silently fail. You should modify the ``suhosin.ini`` file by adding the line:
+
+ ``suhosin.executor.include.whitelist = phar``
+
+Installing via Zip
+------------------
+
+Each release of the AWS SDK for PHP (since 2.3.2) ships with a zip file containing all of the classes and dependencies
+you need to run the SDK in a `PSR-0 `_
+compatible directory structure. Additionally, the zip file includes a class autoloader for the AWS SDK for PHP and the
+following required and suggested libraries:
+
+- `Guzzle `_ for HTTP requests
+- `Symfony2 EventDispatcher `_ for events
+- `Monolog `_ and `Psr\\Log `_ for logging
+- `Doctrine `_ for caching
+
+Using the zip file is great if you:
+
+1. Prefer not to or cannot use package managers like Composer and PEAR.
+2. Cannot use phar files due to environment limitations.
+3. Want to use only specific files from the SDK.
+
+To get started, you must `download the zip file `_, unzip it into your
+project to a location of your choosing, and include the autoloader::
+
+ require '/path/to/aws-autoloader.php';
+
+Alternatively, you can write your own autoloader or use an existing one from your project.
+
+If you have `phing `_ installed, you can clone the SDK and build a zip file yourself using the
+*"zip"* task.
+
+Installing via PEAR
+-------------------
+
+`PEAR `_ packages are easy to install, and are available in your PHP environment path so that they
+are accessible to any PHP project. PEAR packages are not specific to your project, but rather to the machine they're
+installed on.
+
+From the command-line, you can install the SDK with PEAR as follows (this might need to be run as ``sudo``):
+
+.. code-block:: sh
+
+ pear config-set auto_discover 1
+ pear channel-discover pear.amazonwebservices.com
+ pear install aws/sdk
+
+Alternatively, you can combine all three of the preceding statements into one by doing the following:
+
+.. code-block:: sh
+
+ pear -D auto_discover=1 install pear.amazonwebservices.com/sdk
+
+Once the SDK has been installed via PEAR, you can include ``aws.phar`` in your project with:
+
+.. code-block:: php
+
+ require 'AWSSDKforPHP/aws.phar';
+
+This assumes that the PEAR directory is in your PHP include path, which it likely is if PEAR is working correctly.
+If needed, you can determine your PEAR directory by running ``pear config-get php_dir``.
diff --git a/vendor/aws/aws-sdk-php/docs/make.bat b/vendor/aws/aws-sdk-php/docs/make.bat
new file mode 100644
index 0000000..a5573a7
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/make.bat
@@ -0,0 +1,190 @@
+@ECHO OFF
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+ set SPHINXBUILD=sphinx-build
+)
+set BUILDDIR=_build
+set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
+set I18NSPHINXOPTS=%SPHINXOPTS% .
+if NOT "%PAPER%" == "" (
+ set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
+ set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
+)
+
+if "%1" == "" goto help
+
+if "%1" == "help" (
+ :help
+ echo.Please use `make ^` where ^ is one of
+ echo. html to make standalone HTML files
+ echo. dirhtml to make HTML files named index.html in directories
+ echo. singlehtml to make a single large HTML file
+ echo. pickle to make pickle files
+ echo. json to make JSON files
+ echo. htmlhelp to make HTML files and a HTML help project
+ echo. qthelp to make HTML files and a qthelp project
+ echo. devhelp to make HTML files and a Devhelp project
+ echo. epub to make an epub
+ echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
+ echo. text to make text files
+ echo. man to make manual pages
+ echo. texinfo to make Texinfo files
+ echo. gettext to make PO message catalogs
+ echo. changes to make an overview over all changed/added/deprecated items
+ echo. linkcheck to check all external links for integrity
+ echo. doctest to run all doctests embedded in the documentation if enabled
+ goto end
+)
+
+if "%1" == "clean" (
+ for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
+ del /q /s %BUILDDIR%\*
+ goto end
+)
+
+if "%1" == "html" (
+ %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The HTML pages are in %BUILDDIR%/html.
+ goto end
+)
+
+if "%1" == "dirhtml" (
+ %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
+ goto end
+)
+
+if "%1" == "singlehtml" (
+ %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
+ goto end
+)
+
+if "%1" == "pickle" (
+ %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished; now you can process the pickle files.
+ goto end
+)
+
+if "%1" == "json" (
+ %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished; now you can process the JSON files.
+ goto end
+)
+
+if "%1" == "htmlhelp" (
+ %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished; now you can run HTML Help Workshop with the ^
+.hhp project file in %BUILDDIR%/htmlhelp.
+ goto end
+)
+
+if "%1" == "qthelp" (
+ %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished; now you can run "qcollectiongenerator" with the ^
+.qhcp project file in %BUILDDIR%/qthelp, like this:
+ echo.^> qcollectiongenerator %BUILDDIR%\qthelp\AWSSDKforPHP.qhcp
+ echo.To view the help file:
+ echo.^> assistant -collectionFile %BUILDDIR%\qthelp\AWSSDKforPHP.qhc
+ goto end
+)
+
+if "%1" == "devhelp" (
+ %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished.
+ goto end
+)
+
+if "%1" == "epub" (
+ %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The epub file is in %BUILDDIR%/epub.
+ goto end
+)
+
+if "%1" == "latex" (
+ %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
+ goto end
+)
+
+if "%1" == "text" (
+ %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The text files are in %BUILDDIR%/text.
+ goto end
+)
+
+if "%1" == "man" (
+ %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The manual pages are in %BUILDDIR%/man.
+ goto end
+)
+
+if "%1" == "texinfo" (
+ %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
+ goto end
+)
+
+if "%1" == "gettext" (
+ %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
+ goto end
+)
+
+if "%1" == "changes" (
+ %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.The overview file is in %BUILDDIR%/changes.
+ goto end
+)
+
+if "%1" == "linkcheck" (
+ %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Link check complete; look for any errors in the above output ^
+or in %BUILDDIR%/linkcheck/output.txt.
+ goto end
+)
+
+if "%1" == "doctest" (
+ %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Testing of doctests in the sources finished, look at the ^
+results in %BUILDDIR%/doctest/output.txt.
+ goto end
+)
+
+:end
diff --git a/vendor/aws/aws-sdk-php/docs/migration-guide.rst b/vendor/aws/aws-sdk-php/docs/migration-guide.rst
new file mode 100644
index 0000000..d20191f
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/migration-guide.rst
@@ -0,0 +1,480 @@
+===============
+Migration Guide
+===============
+
+This guide shows how to migrate your code to use the new AWS SDK for PHP and how the new SDK differs from the
+AWS SDK for PHP - Version 1.
+
+Introduction
+------------
+
+The PHP language and community have evolved significantly over the past few years. Since the inception of the AWS SDK
+for PHP, PHP has gone through two major version changes (`versions 5.3 and 5.4 `_) and
+many in the PHP community have unified behind the recommendations of the `PHP Framework Interop Group
+`_. Consequently, we decided to make breaking changes to the SDK in order to align with the more
+modern patterns used in the PHP community.
+
+For the new release, we rewrote the SDK from the ground up to address popular customer requests. The new SDK is built on
+top of the `Guzzle HTTP client framework `_, which provides increased performance and enables
+event-driven customization. We also introduced high-level abstractions to make programming common tasks easy. The SDK
+is compatible with PHP 5.3.3 and newer, and follows the PSR-0 standard for namespaces and autoloading.
+
+Which Services are Supported?
+-----------------------------
+
+The AWS SDK for PHP supports all of the AWS services supported by Version 1 of the SDK and more, including Amazon
+Route 53, Amazon Glacier, and AWS Direct Connect. See the `AWS SDK for PHP website `_
+for the full list of services supported by the SDK. Be sure to watch or star our `AWS SDK for PHP GitHub repository
+`_ to stay up-to-date with the latest changes.
+
+What's New?
+-----------
+
+- `PHP 5.3 namespaces `_
+- Follows `PSR-0, PSR-1, and PSR-2 standards `_
+- Built on `Guzzle `_ and utilizes the Guzzle feature set
+- Persistent connection management for both serial and parallel requests
+- Event hooks (via `Symfony2 EventDispatcher
+ `_) for event-driven, custom behavior
+- Request and response entity bodies are stored in ``php://temp`` streams to reduce memory usage
+- Transient networking and cURL failures are automatically retried using truncated exponential backoff
+- Plug-ins for over-the-wire logging and response caching
+- "Waiter" objects that allow you to poll a resource until it is in a desired state
+- Resource iterator objects for easily iterating over paginated responses
+- Service-specific sets of exceptions
+- Modeled responses with a simpler interface
+- Grouped constants (Enums) for service parameter options
+- Flexible request batching system
+- Service builder/container that supports easy configuration and dependency injection
+- Full unit test suite with extensive code coverage
+- `Composer `_ support (including PSR-0 compliance) for installing and autoloading SDK
+ dependencies
+- `Phing `_ ``build.xml`` for installing dev tools, driving testing, and producing ``.phar`` files
+- Fast Amazon DynamoDB batch PutItem and DeleteItem system
+- Multipart upload system for Amazon Simple Storage Service (Amazon S3) and Amazon Glacier that can be paused and
+ resumed
+- Redesigned DynamoDB Session Handler with smarter writing and garbage collection
+- Improved multi-region support
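+
+The truncated exponential backoff mentioned above can be pictured with a small, self-contained sketch (a simplified
+illustration only, not the SDK's actual backoff plugin): the retry delay doubles with each attempt until it reaches
+a fixed cap.
+
+.. code-block:: php
+
+    // Simplified truncated exponential backoff: the delay doubles per
+    // attempt and is capped at $maxDelay (all values in seconds).
+    function backoffDelay($attempt, $baseDelay = 1, $maxDelay = 20)
+    {
+        return min($maxDelay, $baseDelay * pow(2, $attempt));
+    }
+
+    foreach (range(0, 5) as $attempt) {
+        printf("Attempt %d: wait %d seconds\n", $attempt, backoffDelay($attempt));
+    }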
+
+What's Different?
+-----------------
+
+Architecture
+~~~~~~~~~~~~
+
+The new SDK is built on top of `Guzzle `_ and inherits its features and
+conventions. Every AWS service client extends the Guzzle client, defining operations through a service description
+file. The SDK has a much more robust and flexible object-oriented architecture, including the use of design patterns,
+event dispatching and dependency injection. As a result, many of the classes and methods from the previous SDK have
+been changed.
+
+Project Dependencies
+~~~~~~~~~~~~~~~~~~~~
+
+Unlike Version 1 of the SDK, the new SDK does not pre-package all of its dependencies
+in the repository. Dependencies are best resolved and autoloaded via `Composer `_. However,
+when installing the SDK via the downloadable phar, the dependencies are resolved for you.
+
+Namespaces
+~~~~~~~~~~
+
+The SDK's directory structure and namespaces are organized according to `PSR-0 standards
+`_, making the SDK inherently modular. The
+``Aws\Common`` namespace contains the core code of the SDK, and each service client is contained in its own separate
+namespace (e.g., ``Aws\DynamoDb``).
+
+Coding Standards
+~~~~~~~~~~~~~~~~
+
+The SDK adopts the PSR standards produced by the PHP Framework Interop Group. An immediately
+noticeable change is that all method names are now named using lower camel-case
+(e.g., ``putObject`` instead of ``put_object``).
+
+Required Regions
+~~~~~~~~~~~~~~~~
+
+The `region `_ must be provided to instantiate a client
+(except in the case where the service has a single endpoint like Amazon CloudFront). The AWS region you select may
+affect both your performance and costs.
+
+Client Factories
+~~~~~~~~~~~~~~~~
+
+Factory methods instantiate service clients and do the work of setting up the signature,
+exponential backoff settings, exception handler, and so forth. At a minimum you must provide your access key, secret
+key, and region to the client factory, but there are many other settings you can use to customize the client
+behavior.
+
+.. code-block:: php
+
+ $dynamodb = Aws\DynamoDb\DynamoDbClient::factory(array(
+ 'key' => 'your-aws-access-key-id',
+ 'secret' => 'your-aws-secret-access-key',
+ 'region' => 'us-west-2',
+ ));
+
+Configuration
+~~~~~~~~~~~~~
+
+A global configuration file can be used to inject credentials into clients
+automatically via the service builder. The service builder acts as a dependency injection container for the service
+clients. (**Note:** The SDK does not automatically attempt to load the configuration file like in Version 1 of the
+SDK.)
+
+.. code-block:: php
+
+ $aws = Aws\Common\Aws::factory('/path/to/custom/config.php');
+ $s3 = $aws->get('s3');
+
+This technique is the preferred way for instantiating service clients. Your ``config.php`` might look similar to the
+following:
+
+.. code-block:: php
+
+ <?php return array(
+ 'includes' => array('_aws'),
+ 'services' => array(
+ 'default_settings' => array(
+ 'params' => array(
+ 'key' => 'your-aws-access-key-id',
+ 'secret' => 'your-aws-secret-access-key',
+ 'region' => 'us-west-2'
+ )
+ )
+ )
+ );
+
+The line that says ``'includes' => array('_aws')`` includes the default configuration file packaged with the SDK. This
+sets up all of the service clients for you so you can retrieve them by name with the ``get()`` method of the service
+builder.
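+
+As a contrived sketch of what such a dependency injection container does (a hypothetical class, not the actual
+``Aws\Common\Aws`` implementation), the builder maps service names to factory callbacks and hands back shared
+instances:
+
+.. code-block:: php
+
+    // Contrived service builder: registers factory callbacks by name and
+    // caches each instantiated service so repeated get() calls share it.
+    class TinyServiceBuilder
+    {
+        private $factories = array();
+        private $instances = array();
+
+        public function register($name, $factory)
+        {
+            $this->factories[$name] = $factory;
+        }
+
+        public function get($name)
+        {
+            if (!isset($this->instances[$name])) {
+                $this->instances[$name] = call_user_func($this->factories[$name]);
+            }
+            return $this->instances[$name];
+        }
+    }
+
+    $builder = new TinyServiceBuilder();
+    $builder->register('s3', function () { return new stdClass(); });
+    var_dump($builder->get('s3') === $builder->get('s3')); // bool(true)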
+
+Service Operations
+~~~~~~~~~~~~~~~~~~
+
+Executing operations in the new SDK is similar to how it was in the previous SDK, with two
+main differences. First, operations follow the lower camel-case naming convention. Second, a single array parameter is
+used to pass in all of the operation options. The following examples show the Amazon S3 ``PutObject`` operation
+performed in each SDK:
+
+.. code-block:: php
+
+ // Previous SDK - PutObject operation
+ $s3->create_object('bucket-name', 'object-key.txt', array(
+ 'body' => 'lorem ipsum'
+ ));
+
+.. code-block:: php
+
+ // New SDK - PutObject operation
+ $result = $s3->putObject(array(
+ 'Bucket' => 'bucket-name',
+ 'Key' => 'object-key.txt',
+ 'Body' => 'lorem ipsum'
+ ));
+
+In the new SDK, the ``putObject()`` method doesn't actually exist as a method on the client. It is implemented using
+the ``__call()`` magic method of the client and acts as a shortcut to instantiate a command, execute the command,
+and retrieve the result.
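+
+The mechanics of that shortcut can be pictured with a toy ``__call()`` example (a hypothetical class, not the SDK's
+actual implementation, which lives in the Guzzle service client):
+
+.. code-block:: php
+
+    // Toy illustration of the __call() shortcut: an undefined method call
+    // is translated into a named command (a real client would then build,
+    // execute, and resolve an actual Command object).
+    class TinyClient
+    {
+        public function __call($method, $args)
+        {
+            $params = isset($args[0]) ? $args[0] : array();
+            return array('command' => ucfirst($method), 'params' => $params);
+        }
+    }
+
+    $client = new TinyClient();
+    $result = $client->putObject(array('Bucket' => 'bucket-name'));
+    // $result['command'] is now 'PutObject'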
+
+A ``Command`` object encapsulates the request and response of the call to AWS. From the ``Command`` object, you can
+call the ``getResult()`` method (as in the preceding example) to retrieve the parsed result, or you can call the
+``getResponse()`` method to retrieve data about the response (e.g., the status code or the raw response).
+
+The ``Command`` object can also be useful when you want to manipulate the command before execution or need to execute
+several commands in parallel. The following is an example of the same ``PutObject`` operation using the command
+syntax:
+
+.. code-block:: php
+
+ $command = $s3->getCommand('PutObject', array(
+ 'Bucket' => 'bucket-name',
+ 'Key' => 'object-key.txt',
+ 'Body' => 'lorem ipsum'
+ ));
+ $result = $command->getResult();
+
+Or you can use the chainable ``set()`` method on the ``Command`` object:
+
+.. code-block:: php
+
+ $result = $s3->getCommand('PutObject')
+ ->set('Bucket', 'bucket-name')
+ ->set('Key', 'object-key.txt')
+ ->set('Body', 'lorem ipsum')
+ ->getResult();
+
+Responses
+~~~~~~~~~
+
+The format of responses has changed. Responses are no longer instances of the ``CFResponse`` object.
+The ``Command`` object (as seen in the preceding section) of the new SDK encapsulates the request and response, and is
+the object from which to retrieve the results.
+
+.. code-block:: php
+
+ // Previous SDK
+ // Execute the operation and get the CFResponse object
+ $response = $dynamo_db->list_tables();
+ // Get the parsed response body as a SimpleXMLElement
+ $result = $response->body;
+
+ // New SDK
+ // Executes the operation and gets the response in an array-like object
+ $result = $dynamodb->listTables();
+
+The new syntax is similar, but a few fundamental differences exist between responses in the previous SDK and this
+version:
+
+The new SDK represents parsed responses (i.e., the results) as Guzzle ``Model`` objects instead of ``CFSimpleXML``
+objects as in the prior version. These Model objects are easy to work with since they act like arrays. They also
+have helpful built-in features such as mapping and filtering. The content of the results will also look different
+in this version of the SDK. The SDK marshals responses into the models and then transforms them into more convenient
+structures based on the service description. The API documentation details the response of all operations.
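+
+The array-like behavior of those results can be approximated with PHP's ``ArrayAccess`` interface (a toy stand-in,
+not Guzzle's actual ``Model`` class, which also provides helpers such as ``getPath()``):
+
+.. code-block:: php
+
+    // Toy stand-in for a modeled result: array-style reads over parsed data.
+    class TinyModel implements ArrayAccess
+    {
+        private $data;
+
+        public function __construct(array $data) { $this->data = $data; }
+
+        public function offsetExists($key) { return isset($this->data[$key]); }
+        public function offsetGet($key) { return $this->data[$key]; }
+        public function offsetSet($key, $value) { $this->data[$key] = $value; }
+        public function offsetUnset($key) { unset($this->data[$key]); }
+    }
+
+    $result = new TinyModel(array('Table' => array('TableName' => 'people')));
+    echo $result['Table']['TableName']; // people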
+
+Exceptions
+~~~~~~~~~~
+
+The new SDK uses exceptions to communicate errors and bad responses.
+
+Instead of relying on the ``CFResponse::isOK()`` method of the previous SDK to determine if an operation is
+successful, the new SDK throws exceptions when the operation is *not* successful. Therefore, you can assume success
+if there was no exception thrown, but you will need to add ``try...catch`` logic to your application code in order to
+handle potential errors. The following is an example of how to handle the response of an Amazon DynamoDB
+``DescribeTable`` call in the new SDK:
+
+.. code-block:: php
+
+ $tableName = 'my-table';
+ try {
+ $result = $dynamoDb->describeTable(array('TableName' => $tableName));
+
+ printf('The provisioned throughput for table "%s" is %d RCUs and %d WCUs.',
+ $tableName,
+ $result->getPath('Table/ProvisionedThroughput/ReadCapacityUnits'),
+ $result->getPath('Table/ProvisionedThroughput/WriteCapacityUnits')
+ );
+ } catch (Aws\DynamoDb\Exception\DynamoDbException $e) {
+ echo "Error describing table {$tableName}";
+ }
+
+You can get the Guzzle response object back from the command. This is helpful if you need to retrieve the status
+code, additional data from the headers, or the raw response body.
+
+.. code-block:: php
+
+ $command = $dynamoDb->getCommand('DescribeTable', array('TableName' => $tableName));
+ $statusCode = $command->getResponse()->getStatusCode();
+
+You can also get the response object and status code from the exception if one is thrown.
+
+.. code-block:: php
+
+ try {
+ $command = $dynamoDb->getCommand('DescribeTable', array(
+ 'TableName' => $tableName
+ ));
+ $statusCode = $command->getResponse()->getStatusCode();
+ } catch (Aws\DynamoDb\Exception\DynamoDbException $e) {
+ $statusCode = $e->getResponse()->getStatusCode();
+ }
+
+Iterators
+~~~~~~~~~
+
+The SDK provides iterator classes that make it easier to traverse results from list and describe type
+operations. Instead of having to code solutions that perform multiple requests in a loop and keep track of tokens or
+markers, the iterator classes do that for you. You can simply iterate over the results with a ``foreach`` loop:
+
+.. code-block:: php
+
+ $objects = $s3->getIterator('ListObjects', array(
+ 'Bucket' => 'my-bucket-name'
+ ));
+
+ foreach ($objects as $object) {
+ echo $object['Key'] . PHP_EOL;
+ }
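+
+Internally, this kind of iterator keeps requesting pages while the service returns a continuation marker. The loop
+can be sketched as follows (the ``fetchPage()`` function here is a made-up stand-in for a real AWS request, not part
+of the SDK):
+
+.. code-block:: php
+
+    // Fake paginated data source standing in for an AWS list operation:
+    // each "page" carries items plus the marker for the next page.
+    function fetchPage($marker)
+    {
+        $pages = array(
+            0 => array('Items' => array('a', 'b'), 'NextMarker' => 1),
+            1 => array('Items' => array('c'), 'NextMarker' => null),
+        );
+        return $pages[$marker];
+    }
+
+    // Follow the markers until the service stops returning one
+    function fetchAll()
+    {
+        $items = array();
+        $marker = 0;
+        do {
+            $page = fetchPage($marker);
+            $items = array_merge($items, $page['Items']);
+            $marker = $page['NextMarker'];
+        } while ($marker !== null);
+        return $items;
+    }
+
+    print_r(fetchAll()); // a, b, c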
+
+Comparing Code Samples from Both SDKs
+-------------------------------------
+
+Example 1 - Amazon S3 ListParts Operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+From Version 1 of the SDK
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: php
+
+ $s3 = new AmazonS3();
+
+ $response = $s3->list_parts('my-bucket-name', 'my-object-key', 'my-upload-id', array(
+ 'max-parts' => 10
+ ));
+
+ if ($response->isOK())
+ {
+ // Loop through and display the part numbers
+ foreach ($response->body->Part as $part) {
+ echo "{$part->PartNumber}\n";
+ }
+ }
+ else
+ {
+ echo "Error during S3 ListParts operation.\n";
+ }
+
+From Version 2 of the SDK
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: php
+
+ <?php require 'vendor/autoload.php';
+
+ use Aws\Common\Aws;
+ use Aws\S3\Exception\S3Exception;
+
+ $aws = Aws::factory('/path/to/custom/config.php');
+ $s3 = $aws->get('s3');
+
+ try {
+ $result = $s3->listParts(array(
+ 'Bucket' => 'my-bucket-name',
+ 'Key' => 'my-object-key',
+ 'UploadId' => 'my-upload-id',
+ 'MaxParts' => 10
+ ));
+
+ // Loop through and display the part numbers
+ foreach ($result['Parts'] as $part) {
+ echo "{$part['PartNumber']}\n";
+ }
+ } catch (S3Exception $e) {
+ echo "Error during S3 ListParts operation.\n";
+ }
+
+Example 2 - Amazon DynamoDB Scan Operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+From Version 1 of the SDK
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: php
+
+ $dynamo_db = new AmazonDynamoDB();
+
+ $people = array();
+ $start_key = null;
+
+ // Perform as many Scan operations as needed to acquire all of the results
+ do
+ {
+ // Set up the parameters for the DynamoDB Scan operation
+ $params = array(
+ 'TableName' => 'people',
+ 'AttributesToGet' => array('id', 'age', 'name'),
+ 'ScanFilter' => array(
+ 'age' => array(
+ 'ComparisonOperator' =>
+ AmazonDynamoDB::CONDITION_GREATER_THAN_OR_EQUAL,
+ 'AttributeValueList' => array(
+ array(AmazonDynamoDB::TYPE_NUMBER => '16')
+ )
+ ),
+ )
+ );
+
+ // Add the exclusive start key parameter if needed
+ if ($start_key)
+ {
+ $params['ExclusiveStartKey'] = array(
+ 'HashKeyElement' => array(
+ AmazonDynamoDB::TYPE_STRING => $start_key
+ )
+ );
+
+ $start_key = null;
+ }
+
+ // Perform the Scan operation and get the response
+ $response = $dynamo_db->scan($params);
+
+ // If the response succeeded, get the results
+ if ($response->isOK())
+ {
+ foreach ($response->body->Items as $item)
+ {
+ $people[] = (string) $item->name->{AmazonDynamoDB::TYPE_STRING};
+ }
+
+ // Get the last evaluated key if it is provided
+ if ($response->body->LastEvaluatedKey)
+ {
+ $start_key = (string) $response->body
+ ->LastEvaluatedKey
+ ->HashKeyElement
+ ->{AmazonDynamoDB::TYPE_STRING};
+ }
+ }
+ else
+ {
+ // Throw an exception if the response was not OK (200-level)
+ throw new DynamoDB_Exception('DynamoDB Scan operation failed.');
+ }
+ }
+ while ($start_key);
+
+ print_r($people);
+
+From Version 2 of the SDK
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: php
+
+ use Aws\Common\Aws;
+ use Aws\DynamoDb\Enum\ComparisonOperator;
+ use Aws\DynamoDb\Enum\Type;
+
+ $aws = Aws::factory('/path/to/config.php');
+ $dynamodb = $aws->get('dynamodb');
+
+ // Create a ScanIterator and setup the parameters for the DynamoDB Scan operation
+ $scan = $dynamodb->getIterator('Scan', array(
+ 'TableName' => 'people',
+ 'AttributesToGet' => array('id', 'age', 'name'),
+ 'ScanFilter' => array(
+ 'age' => array(
+ 'ComparisonOperator' => ComparisonOperator::GE,
+ 'AttributeValueList' => array(
+ array(Type::NUMBER => '16')
+ )
+ ),
+ )
+ ));
+
+ // Perform as many Scan operations as needed to acquire all the names of people
+ // that are 16 or older
+ $people = array();
+ foreach ($scan as $item) {
+ $people[] = $item['name'][Type::STRING];
+ }
+
+ print_r($people);
diff --git a/vendor/aws/aws-sdk-php/docs/performance.rst b/vendor/aws/aws-sdk-php/docs/performance.rst
new file mode 100644
index 0000000..353678c
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/performance.rst
@@ -0,0 +1,287 @@
+=================
+Performance Guide
+=================
+
+The AWS SDK for PHP is able to send HTTP requests to various web services with minimal overhead. This document serves
+as a guide that will help you to achieve optimal performance with the SDK.
+
+.. contents::
+ :depth: 1
+ :local:
+ :class: inline-toc
+
+Upgrade PHP
+-----------
+
+Using an up-to-date version of PHP will generally improve the performance of your PHP applications. Did you know that
+PHP 5.4 is 20-40% faster than PHP 5.3?
+Upgrading to PHP 5.4 or greater will provide better performance and
+lower memory usage. If you cannot upgrade from PHP 5.3 to PHP 5.4 or PHP 5.5, upgrading to PHP 5.3.18 or greater will
+improve performance over older versions of PHP 5.3.
+
+You can install PHP 5.4 on an Amazon Linux AMI using the following command.
+
+.. code-block:: bash
+
+ yum install php54
+
+Use PHP 5.5 or an opcode cache like APC
+---------------------------------------
+
+To improve the overall performance of your PHP environment, it is highly recommended that you use an opcode cache
+such as the OPcache built into PHP 5.5, APC, XCache, or WinCache. By default, PHP must load a file from disk, parse
+the PHP code into opcodes, and finally execute the opcodes. Installing an opcode cache allows the parsed opcodes to
+be cached in memory so that you do not need to parse the script on every web server request, and in ideal
+circumstances, these opcodes can be served directly from memory.
+
+We have taken great care to ensure that the SDK will perform well in an environment that utilizes an opcode cache.
+
+.. note::
+
+ PHP 5.5 comes with an opcode cache that is installed and enabled by default:
+ http://php.net/manual/en/book.opcache.php
+
+ If you are using PHP 5.5, then you may skip the remainder of this section.
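As a quick sanity check, you can ask PHP which of the common opcode cache extensions is loaded; ``extension_loaded()`` is a standard PHP function, and the names below are the usual registration names for each cache (the PHP 5.5 built-in cache registers as ``Zend OPcache``):

```php
<?php
// Report which of the commonly used opcode cache extensions are loaded
// in the current PHP environment.
$known = array('Zend OPcache', 'apc', 'xcache', 'wincache');
$loaded = array_values(array_filter($known, 'extension_loaded'));

echo $loaded
    ? 'Opcode cache(s) loaded: ' . implode(', ', $loaded)
    : 'No opcode cache loaded';
```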
+
+APC
+~~~
+
+If you are not able to run PHP 5.5, then we recommend using APC as an opcode cache.
+
+Installing on Amazon Linux
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When using Amazon Linux, you can install APC using one of the following commands, depending on whether you are using
+PHP 5.3 or PHP 5.4.
+
+.. code-block:: bash
+
+ # For PHP 5.4
+ yum install php54-pecl-apc
+
+ # For PHP 5.3
+ yum install php-pecl-apc
+
+Modifying APC settings
+^^^^^^^^^^^^^^^^^^^^^^
+
+APC configuration settings can be set and configured in the ``apc.ini`` file of most systems. You can find more
+information about configuring APC in the `APC documentation <http://php.net/manual/en/book.apc.php>`_ on PHP.net.
+
+The APC configuration file is located at ``/etc/php.d/apc.ini`` on Amazon Linux.
+
+.. code-block:: bash
+
+ # You need root permissions to modify the file
+ sudo vim /etc/php.d/apc.ini
+
+apc.shm_size=128M
+^^^^^^^^^^^^^^^^^
+
+It is recommended that you set the ``apc.shm_size``
+setting to be 128M or higher. You should investigate what the right value will be for your application. The ideal
+value will depend on how many files your application includes, what other frameworks are used by your application, and
+if you are caching data in the APC user cache.
+
+You can run the following command on Amazon Linux to set apc.shm_size to 128M::
+
+ sed -i "s/apc.shm_size=.*/apc.shm_size=128M/g" /etc/php.d/apc.ini
+
+apc.stat=0
+^^^^^^^^^^
+
+The SDK adheres to PSR-0 and relies heavily on class autoloading. When ``apc.stat=1``, APC will perform a stat on
+each cached entry to ensure that the file has not been updated since it was cached in APC. This incurs a system call for
+every autoloaded class required by a PHP script (you can see this for yourself by running ``strace`` on your
+application).
+
+You can tell APC not to stat each cached file by setting ``apc.stat=0`` in your apc.ini file. This change will generally
+improve the overall performance of APC, but it will require you to explicitly clear the APC cache when a cached file
+should be updated. This can be accomplished with Apache by issuing a hard or graceful restart. This restart step could
+be added as part of the deployment process of your application.
+
+You can run the following command on Amazon Linux to set apc.stat to 0::
+
+ sed -i "s/apc.stat=1/apc.stat=0/g" /etc/php.d/apc.ini
+
+.. admonition:: From the PHP documentation
+
+ This defaults to on, forcing APC to stat (check) the script on each request to determine if it has been modified. If
+ it has been modified it will recompile and cache the new version. If this setting is off, APC will not check, which
+ usually means that to force APC to recheck files, the web server will have to be restarted or the cache will have to
+ be manually cleared. Note that FastCGI web server configurations may not clear the cache on restart. On a production
+ server where the script files rarely change, a significant performance boost can be achieved by disabling stats.
+
+ For included/required files this option applies as well, but note that for relative path includes (any path that
+ doesn't start with / on Unix) APC has to check in order to uniquely identify the file. If you use absolute path
+ includes APC can skip the stat and use that absolute path as the unique identifier for the file.
+
+Use Composer with a classmap autoloader
+---------------------------------------
+
+Using `Composer <http://getcomposer.org>`_ is the recommended way to install the AWS SDK for PHP. Composer is a
+dependency manager for PHP that can be used to pull in all of the dependencies of the SDK and generate an autoloader.
+
+Autoloaders are used to lazily load classes as they are required by a PHP script. Composer will generate an autoloader
+that is able to autoload the PHP scripts of your application and all of the PHP scripts of the vendors required by your
+application (i.e. the AWS SDK for PHP). When running in production, it is highly recommended that you use a classmap
+autoloader to improve the autoloader's speed. You can generate a classmap autoloader by passing the ``-o`` or
+``--optimize-autoloader`` option to Composer's ``install`` command::
+
+ php composer.phar install --optimize-autoloader
+
+Please consult the :doc:`installation` guide for more information on how to install the SDK using Composer.
+
+Uninstall Xdebug
+----------------
+
+`Xdebug <http://xdebug.org>`_ is an amazing tool that can be used to identify performance bottlenecks. However, if
+performance is critical to your application, do not install the Xdebug extension on your production environment. Simply
+loading the extension will greatly slow down the SDK.
+
+When running on Amazon Linux, Xdebug can be removed with the following command:
+
+.. code-block:: bash
+
+ # PHP 5.4
+ yum remove php54-pecl-xdebug
+
+ # PHP 5.3
+ yum remove php-pecl-xdebug
+
+Install PECL uri_template
+-------------------------
+
+The SDK utilizes URI templates to power each operation. In order to be compatible out of the box with the majority
+of PHP environments, the default URI template expansion implementation is written in PHP.
+The PECL uri_template extension is a URI template extension for PHP written in C. This C
+implementation is about 3 times faster than the default PHP implementation for expanding URI templates. Your
+application will automatically begin utilizing the PECL uri_template extension after it is installed.
+
+.. code-block:: bash
+
+ pecl install uri_template-alpha
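To illustrate what URI template expansion involves, here is a deliberately simplified, hypothetical expander that handles only plain ``{name}`` variables; it is not the SDK's or the extension's actual implementation, which follow the full URI template syntax:

```php
<?php
// Simplified sketch of URI template expansion: replace each {name} in the
// template with the percent-encoded value of the corresponding variable.
// Unknown variables expand to an empty string.
function expandUriTemplate($template, array $variables)
{
    return preg_replace_callback(
        '/\{(\w+)\}/',
        function ($match) use ($variables) {
            return isset($variables[$match[1]])
                ? rawurlencode($variables[$match[1]])
                : '';
        },
        $template
    );
}

echo expandUriTemplate('/{Bucket}/{Key}', array(
    'Bucket' => 'my-bucket',
    'Key'    => 'my key.txt',
));
// Prints: /my-bucket/my%20key.txt
```

The C extension performs this same substitution work natively, which is why installing it speeds up every request the SDK builds.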
+
+Turn off parameter validation
+-----------------------------
+
+The SDK utilizes service descriptions to tell the client how to serialize an HTTP request and parse an HTTP response
+into a Model object. Along with serialization information, service descriptions are used to validate operation inputs
+client-side before sending a request. Skipping validation is a micro-optimization, but validation can typically be
+disabled safely in production by setting the ``validation`` option in a client factory method to ``false``.
+
+.. code-block:: php
+
+ $client = Aws\DynamoDb\DynamoDbClient::factory(array(
+ 'region' => 'us-west-2',
+ 'validation' => false
+ ));
+
+Cache instance profile credentials
+----------------------------------
+
+When you do not provide credentials to the SDK and do not have credentials defined in your environment variables, the
+SDK will attempt to utilize IAM instance profile credentials by contacting the Amazon EC2 instance metadata service
+(IMDS). Contacting the IMDS requires an HTTP request to retrieve credentials from the IMDS.
+
+You can cache these instance profile credentials in memory until they expire and avoid the cost of sending an HTTP
+request to the IMDS each time the SDK is utilized. Set the ``credentials.cache`` option to ``true`` to attempt to
+utilize the Doctrine Cache PHP library to cache credentials with APC.
+
+.. code-block:: php
+
+ $client = Aws\DynamoDb\DynamoDbClient::factory(array(
+ 'region' => 'us-west-2',
+ 'credentials.cache' => true
+ ));
+
+.. note::
+
+ You will need to install Doctrine Cache in order for the SDK to cache credentials when setting
+ ``credentials.cache`` to ``true``. You can add doctrine/cache to your composer.json dependencies by adding it to
+ your project's ``require`` section::
+
+ {
+ "require": {
+ "aws/aws-sdk-php": "2.*",
+ "doctrine/cache": "1.*"
+ }
+ }
+
+Check if you are being throttled
+--------------------------------
+
+You can check to see if you are being throttled by enabling the exponential backoff logger option. You can set the
+``client.backoff.logger`` option to ``debug`` when in development, but we recommend that you provide a
+``Guzzle\Log\LogAdapterInterface`` object when running in production.
+
+.. code-block:: php
+
+ $client = Aws\DynamoDb\DynamoDbClient::factory(array(
+ 'region' => 'us-west-2',
+ 'client.backoff.logger' => 'debug'
+ ));
+
+When using Amazon DynamoDB, you can monitor your tables for throttling using Amazon CloudWatch.
+
+Preload frequently included files
+---------------------------------
+
+The AWS SDK for PHP adheres to PSR-0 and heavily utilizes class autoloading. Each class is in a separate file and
+is included lazily as it is required. Enabling an opcode cache like APC, setting ``apc.stat=0``, and utilizing an
+optimized Composer autoloader will help to mitigate the performance cost of autoloading the classes needed to utilize
+the SDK. In situations like hosting a webpage where you are loading the same classes over and over, you can shave off a
+bit more time by compiling all of the autoloaded classes into a single file thereby completely eliminating the cost of
+autoloading. This technique can not only speed up the use of the SDK for specific use cases (e.g. using the
+Amazon DynamoDB session handler), but can also speed up other aspects of your application. Even with ``apc.stat=0``,
+preloading classes that you know will be used in your application will be slightly faster than relying on autoloading.
+
+You can easily generate a compiled autoloader file using the
+ClassPreloader project. View the project's README for information on
+creating a "preloader" for use with the AWS SDK for PHP.
+
+Profile your code to find performance bottlenecks
+-------------------------------------------------
+
+You will need to profile your application to determine the bottlenecks. This can be done using
+`Xdebug <http://xdebug.org>`_, XHProf, strace, and various other tools. There are many resources available on the
+internet to help you track down performance problems with your application. Here are a few that we have found useful:
+
+* http://talks.php.net/show/devconf/0
+* http://talks.php.net/show/perf_tunning/16
+
+Comparing SDK1 and SDK2
+-----------------------
+
+Software performance is very subjective and depends heavily on factors outside of the control of the SDK. The
+AWS SDK for PHP is tuned to cover the broadest set of performance sensitive applications using AWS. While there may
+be a few isolated cases where V1 of the SDK is as fast or faster than V2, that is not generally true and comes
+with the loss of extensibility, maintainability, persistent HTTP connections, response parsing, PSR compliance, etc.
+
+Depending on your use case, you will find that a properly configured environment running the AWS SDK for PHP is
+generally just as fast as SDK1 for sending a single request and more than 350% faster than SDK1 for sending many
+requests.
+
+Comparing batch requests
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+A common misconception when comparing the performance of SDK1 and SDK2 is that SDK1 is faster than SDK2 when sending
+requests using the ``batch()`` API.
+
+SDK1 is generally *not* faster at sending requests in parallel than SDK2. There may be some cases where SDK1 will appear
+to more quickly complete the process of sending multiple requests in parallel, but SDK1 does not retry throttled
+requests when using the ``batch()`` API. In SDK2, throttled requests are automatically retried in parallel using
+truncated exponential backoff. Automatically retrying failed requests will help to ensure that your application is
+successfully completing the requests that you think it is.
+
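The delay schedule behind truncated exponential backoff can be sketched in a few lines. The base delay and cap below are made-up illustrative values, not the SDK's actual retry tuning:

```php
<?php
// Illustrative truncated exponential backoff: the delay doubles with each
// retry attempt and is capped ("truncated") at a maximum value, so repeated
// throttling never produces unbounded waits.
function backoffDelayMs($retries, $baseMs = 100, $maxMs = 20000)
{
    return min($maxMs, $baseMs * (1 << $retries));
}

$delays = array();
for ($retry = 0; $retry <= 8; $retry++) {
    $delays[] = backoffDelayMs($retry);
}

echo implode(' ', $delays);
// Prints: 100 200 400 800 1600 3200 6400 12800 20000
```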
+You can always disable retries if your use case does not benefit from retrying failed requests. To disable retries,
+set ``client.backoff`` to ``false`` when creating a client.
+
+.. code-block:: php
+
+ $client = Aws\DynamoDb\DynamoDbClient::factory(array(
+ 'region' => 'us-west-2',
+ 'client.backoff' => false
+ ));
diff --git a/vendor/aws/aws-sdk-php/docs/quick-start.rst b/vendor/aws/aws-sdk-php/docs/quick-start.rst
new file mode 100644
index 0000000..2c84b93
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/quick-start.rst
@@ -0,0 +1,197 @@
+=====================
+Getting Started Guide
+=====================
+
+This "Getting Started Guide" focuses on basic usage of the **AWS SDK for PHP**. After reading through this material, you
+should be familiar with the SDK and be able to start using the SDK in your application. This guide assumes that you have
+already :doc:`downloaded and installed the SDK <installation>` and retrieved your AWS access keys.
+
+Including the SDK
+-----------------
+
+No matter which technique you have used to install the SDK, the SDK can be included into your project or script with
+just a single include (or require) statement. Please refer to the following table for the PHP code that best fits your
+installation technique. Please replace any instances of ``/path/to/`` with the actual path on your system.
+
+========================== =============================================================================================
+Installation Technique Include Statement
+========================== =============================================================================================
+Using Composer ``require '/path/to/vendor/autoload.php';``
+-------------------------- ---------------------------------------------------------------------------------------------
+Using the Phar ``require '/path/to/aws.phar';``
+-------------------------- ---------------------------------------------------------------------------------------------
+Using the Zip ``require '/path/to/aws-autoloader.php';``
+-------------------------- ---------------------------------------------------------------------------------------------
+Using PEAR ``require 'AWSSDKforPHP/aws.phar';``
+========================== =============================================================================================
+
+For the remainder of this guide, we will show examples that use the Composer installation method. If you are using a
+different installation method, then you can refer to this section and substitute in the proper code.
+
+Creating a client object
+------------------------
+
+To use the SDK, you first need to instantiate a **client** object for the service you are using. We'll use the
+Amazon Simple Storage Service (Amazon S3) client as an example. You can instantiate a client using two different
+techniques.
+
+.. _client_factory_method:
+
+Factory method
+~~~~~~~~~~~~~~
+
+The easiest way to get up and running quickly is to use the web service client's ``factory()`` method and provide your
+**credential profile** (via the ``profile`` option), which identifies the set of credentials you want to use from your
+``~/.aws/credentials`` file (see :ref:`credential_profiles`).
+
+.. code-block:: php
+
+ $s3Client = \Aws\S3\S3Client::factory(array(
+ 'profile' => 'my_profile',
+ ));
+
+You can also choose to forgo specifying credentials if you are relying on **instance profile credentials**, provided via
+AWS Identity and Access Management (AWS IAM) roles for EC2 instances,
+or **environment credentials** sourced from the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` environment
+variables. For more information about credentials, see :doc:`credentials`.
+
+.. note::
+
+ Instance profile credentials and other temporary credentials generated by the AWS Security Token Service (AWS STS)
+ are not supported by every service. Please check if the service you are using supports temporary credentials by
+ reading *AWS Services that Support AWS STS*.
+
+Depending on the service, you may also need to provide a **region** value to the ``factory()`` method. The region value
+is used by the SDK to determine the regional endpoint to
+use to communicate with the service. Amazon S3 does not require you to provide a region, but other services like Amazon
+Elastic Compute Cloud (Amazon EC2) do. You can specify a region and other configuration settings along with your
+credentials in the array argument that you provide.
+
+.. code-block:: php
+
+ $ec2Client = \Aws\Ec2\Ec2Client::factory(array(
+ 'profile' => 'my_profile',
+ 'region' => 'us-east-1',
+ ));
+
+To know if the service client you are using requires a region and to find out which regions are supported by the client,
+please see the appropriate service-specific guide.
+
+Service builder
+~~~~~~~~~~~~~~~
+
+Another way to instantiate a service client is using the ``Aws\Common\Aws`` object (a.k.a the **service builder**).
+The ``Aws`` object is essentially a service locator, and
+allows you to specify credentials and configuration settings such that they can be shared across all client instances.
+Also, every time you fetch a client object from the ``Aws`` object, it will be exactly the same instance.
+
+.. code-block:: php
+
+ use Aws\Common\Aws;
+
+ // Create a service locator using a configuration file
+ $aws = Aws::factory(array(
+ 'profile' => 'my_profile',
+ 'region' => 'us-east-1',
+ ));
+
+ // Get client instances from the service locator by name
+ $s3Client = $aws->get('s3');
+ $ec2Client = $aws->get('ec2');
+
+ // The service locator always returns the same instance
+ $anotherS3Client = $aws->get('s3');
+ assert('$s3Client === $anotherS3Client');
+
+You can also declare your credentials and settings in a **configuration file**, and provide the path to that file (in
+either php or json format) when you instantiate the ``Aws`` object.
+
+.. code-block:: php
+
+ // Create an `Aws` object using a configuration file
+ $aws = Aws::factory('/path/to/config.php');
+
+ // Get the client from the service locator by namespace
+ $s3Client = $aws->get('s3');
+
+A simple configuration file should look something like this:
+
+.. code-block:: php
+
+ <?php
+
+ return array(
+ 'includes' => array('_aws'),
+ 'services' => array(
+ 'default_settings' => array(
+ 'params' => array(
+ 'key' => 'YOUR_AWS_ACCESS_KEY_ID',
+ 'secret' => 'YOUR_AWS_SECRET_ACCESS_KEY',
+ // OR: 'profile' => 'my_profile',
+ 'region' => 'us-west-2'
+ )
+ )
+ )
+ );
+
+For more information about configuration files, please see :doc:`configuration`.
+
+Performing service operations
+-----------------------------
+
+.. include:: _snippets/performing-operations.txt
+
+To learn about performing operations in more detail, including using command objects, see :doc:`feature-commands`.
+
+Working with modeled responses
+------------------------------
+
+.. include:: _snippets/models-intro.txt
+
+To learn more about how to work with modeled responses, read the detailed guide to :doc:`feature-models`.
+
+Detecting and handling errors
+-----------------------------
+
+When you perform an operation and it succeeds, it will return a modeled response. If there was an error with the
+request, then an exception is thrown. For this reason, you should use ``try``/``catch`` blocks around your operations if
+you need to handle errors in your code. The SDK throws service-specific exceptions when a server-side error occurs.
+
+In the following example, the ``Aws\S3\S3Client`` is used. If there is an error, the exception thrown will be of the
+type: ``Aws\S3\Exception\S3Exception``.
+
+.. code-block:: php
+
+ try {
+ $s3Client->createBucket(array(
+ 'Bucket' => 'my-bucket'
+ ));
+ } catch (\Aws\S3\Exception\S3Exception $e) {
+ // The bucket couldn't be created
+ echo $e->getMessage();
+ }
+
+Exceptions thrown by the SDK like this all extend the ``ServiceResponseException`` class, which has
+some custom methods that might help you discover what went wrong.
+
+Waiters
+-------
+
+.. include:: _snippets/waiters-intro.txt
+
+To learn more about how to use and configure waiters, please read the detailed guide to :doc:`feature-waiters`.
+
+Iterators
+---------
+
+.. include:: _snippets/iterators-intro.txt
+
+To learn more about how to use and configure iterators, please read the detailed guide to :doc:`feature-iterators`.
diff --git a/vendor/aws/aws-sdk-php/docs/requirements.rst b/vendor/aws/aws-sdk-php/docs/requirements.rst
new file mode 100644
index 0000000..d4258ae
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/requirements.rst
@@ -0,0 +1,39 @@
+============
+Requirements
+============
+
+Aside from a baseline understanding of object-oriented programming in PHP (including PHP 5.3 namespaces), there are a
+few minimum system requirements to start using the AWS SDK for PHP. The extensions listed are common and are
+installed with PHP 5.3 by default in most environments.
+
+Minimum requirements
+--------------------
+
+* PHP 5.3.3+ compiled with the cURL extension
+* A recent version of cURL 7.16.2+ compiled with OpenSSL and zlib
+
+.. note::
+
+ To work with Amazon CloudFront private distributions, you must have the OpenSSL PHP extension to sign private
+ CloudFront URLs.
+
+.. _optimal-settings:
+
+Optimal settings
+----------------
+
+Please consult the :doc:`performance` for a list of recommendations and optimal settings that can be made to
+ensure that you are using the SDK as efficiently as possible.
+
+Compatibility test
+------------------
+
+Run the ``compatibility-test.php`` file in the SDK to quickly check if your system is capable of running the SDK. In
+addition to meeting the minimum system requirements of the SDK, the compatibility test checks for optional settings and
+makes recommendations that can help you to improve the performance of the SDK. The compatibility test can output text
+for the command line or a web browser. When running in a browser, successful checks appear in green, warnings in
+purple, and failures in red. When running from the CLI, the result of a check will appear on each line.
+
+When reporting an issue with the SDK, it is often helpful to share information about your system. Supplying the output
+of the compatibility test in forum posts or GitHub issues can help to streamline the process of identifying the root
+cause of an issue.
diff --git a/vendor/aws/aws-sdk-php/docs/requirements.txt b/vendor/aws/aws-sdk-php/docs/requirements.txt
new file mode 100644
index 0000000..bd758a7
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/requirements.txt
@@ -0,0 +1,3 @@
+rst2pdf
+Sphinx>=1.2b1
+guzzle_sphinx_theme>=0.3.0
diff --git a/vendor/aws/aws-sdk-php/docs/service-autoscaling.rst b/vendor/aws/aws-sdk-php/docs/service-autoscaling.rst
new file mode 100644
index 0000000..ceed814
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/service-autoscaling.rst
@@ -0,0 +1,5 @@
+.. service:: AutoScaling
+
+.. include:: _snippets/incomplete.txt
+
+.. apiref:: AutoScaling
diff --git a/vendor/aws/aws-sdk-php/docs/service-cloudformation.rst b/vendor/aws/aws-sdk-php/docs/service-cloudformation.rst
new file mode 100644
index 0000000..bc932f1
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/service-cloudformation.rst
@@ -0,0 +1,5 @@
+.. service:: CloudFormation
+
+.. include:: _snippets/incomplete.txt
+
+.. apiref:: CloudFormation
diff --git a/vendor/aws/aws-sdk-php/docs/service-cloudfront-20120505.rst b/vendor/aws/aws-sdk-php/docs/service-cloudfront-20120505.rst
new file mode 100644
index 0000000..1feb701
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/service-cloudfront-20120505.rst
@@ -0,0 +1,134 @@
+.. service:: CloudFront 2012-05-05
+
+Signing CloudFront URLs for Private Distributions
+-------------------------------------------------
+
+Signed URLs allow you to provide users access to your private content. A signed URL includes additional information
+(e.g., expiration time) that gives you more control over access to your content. This additional information appears in
+a policy statement, which is based on either a canned policy or a custom policy. For information about how to set up
+private distributions and why you need to sign URLs, please read the Serving Private Content through CloudFront
+section of the Amazon CloudFront Developer Guide.
+
+.. note::
+
+ You must have the OpenSSL extension installed in your PHP environment in order to sign CloudFront URLs.
+
+You can sign a URL using the CloudFront client in the SDK. First you must make sure to provide your CloudFront
+Private Key and Key Pair ID to the CloudFront client.
+
+.. code-block:: php
+
+ $cloudFront = \Aws\CloudFront\CloudFrontClient::factory(array(
+ 'private_key' => '/path/to/your/cloudfront-private-key.pem',
+ 'key_pair_id' => '<cloudfront key pair id>',
+ ));
+
+You can alternatively specify the Private Key and Key Pair ID in your AWS config file and use the service builder to
+instantiate the CloudFront client. The following is an example config file that specifies the CloudFront key information.
+
+.. code-block:: php
+
+ <?php
+
+ return array(
+ 'includes' => array('_aws'),
+ 'services' => array(
+ 'default_settings' => array(
+ 'params' => array(
+ 'key' => '<aws access key>',
+ 'secret' => '<aws secret key>',
+ 'region' => 'us-west-2'
+ )
+ ),
+ 'cloudfront' => array(
+ 'extends' => 'cloudfront',
+ 'params' => array(
+ 'private_key' => '/path/to/your/cloudfront-private-key.pem',
+ 'key_pair_id' => '<cloudfront key pair id>'
+ )
+ )
+ )
+ );
+
+You can sign a CloudFront URL for a video resource using either a canned or custom policy.
+
+.. code-block:: php
+
+ // Setup parameter values for the resource
+ $streamHostUrl = 'rtmp://example-distribution.cloudfront.net';
+ $resourceKey = 'videos/example.mp4';
+ $expires = time() + 300;
+
+ // Create a signed URL for the resource using the canned policy
+ $signedUrlCannedPolicy = $cloudFront->getSignedUrl(array(
+ 'url' => $streamHostUrl . '/' . $resourceKey,
+ 'expires' => $expires,
+ ));
+
+For versions of the SDK later than 2.3.1, instead of providing your private key information when you instantiate the
+client, you can provide it at the time when you sign the URL.
+
+.. code-block:: php
+
+ $signedUrlCannedPolicy = $cloudFront->getSignedUrl(array(
+ 'url' => $streamHostUrl . '/' . $resourceKey,
+ 'expires' => $expires,
+ 'private_key' => '/path/to/your/cloudfront-private-key.pem',
+ 'key_pair_id' => ''
+ ));
+
+To use a custom policy, provide the ``policy`` key instead of ``expires``.
+
+.. code-block:: php
+
+ // Example custom policy that limits access by source IP and expiration time
+ $customPolicy = <<<POLICY
+ {
+ "Statement": [
+ {
+ "Resource": "{$streamHostUrl}/{$resourceKey}",
+ "Condition": {
+ "IpAddress": {"AWS:SourceIp": "{$_SERVER['REMOTE_ADDR']}/32"},
+ "DateLessThan": {"AWS:EpochTime": {$expires}}
+ }
+ }
+ ]
+ }
+ POLICY;
+
+ // Create a signed URL for the resource using the custom policy
+ $signedUrlCustomPolicy = $cloudFront->getSignedUrl(array(
+ 'url' => $streamHostUrl . '/' . $resourceKey,
+ 'policy' => $customPolicy,
+ ));
+
+The form of the signed URL differs depending on whether the URL you are signing uses the "http" or "rtmp"
+scheme. In the case of "http", the full, absolute URL is returned. For "rtmp", only the relative URL is returned for
+your convenience, because some players require the host and path to be provided as separate parameters.
+
+The following is an example of how you could use the signed URL to construct a web page displaying a video using
+JWPlayer. The same type of technique would apply to other players like FlowPlayer, but will require different
+client-side code.
+
+.. code-block:: html
+
+ <html>
+ <head>
+ <title>Amazon CloudFront Streaming Example</title>
+ <script type="text/javascript" src="/path/to/jwplayer/jwplayer.js"></script>
+ </head>
+ <body>
+ <div id="video">The canned policy video will be here.</div>
+ <script type="text/javascript">
+ // For RTMP distributions, provide the streaming host and the signed,
+ // relative resource path to the player as separate parameters
+ jwplayer('video').setup({
+ modes: [{
+ type: 'flash',
+ src: '/path/to/jwplayer/player.swf',
+ config: {
+ streamer: '<?php echo $streamHostUrl; ?>/cfx/st',
+ file: '<?php echo $signedUrlCannedPolicy; ?>'
+ }
+ }]
+ });
+ </script>
+ </body>
+ </html>
+
+.. include:: _snippets/incomplete.txt
+
+.. apiref:: CloudFront 2012-05-05
diff --git a/vendor/aws/aws-sdk-php/docs/service-cloudfront.rst b/vendor/aws/aws-sdk-php/docs/service-cloudfront.rst
new file mode 100644
index 0000000..4190b26
--- /dev/null
+++ b/vendor/aws/aws-sdk-php/docs/service-cloudfront.rst
@@ -0,0 +1,134 @@
+.. service:: CloudFront
+
+Signing CloudFront URLs for Private Distributions
+-------------------------------------------------
+
+Signed URLs allow you to provide users access to your private content. A signed URL includes additional information
+(e.g., expiration time) that gives you more control over access to your content. This additional information appears in
+a policy statement, which is based on either a canned policy or a custom policy. For information about how to set up
+private distributions and why you need to sign URLs, please read the Serving Private Content through CloudFront
+section of the Amazon CloudFront Developer Guide.
+
+.. note::
+
+ You must have the OpenSSL extension installed in your PHP environment in order to sign CloudFront URLs.
+
+You can sign a URL using the CloudFront client in the SDK. First you must make sure to provide your CloudFront
+Private Key and Key Pair ID to the CloudFront client.
+
+.. code-block:: php
+
+ $cloudFront = \Aws\CloudFront\CloudFrontClient::factory(array(
+ 'private_key' => '/path/to/your/cloudfront-private-key.pem',
+ 'key_pair_id' => '<cloudfront key pair id>',
+ ));
+
+You can alternatively specify the Private Key and Key Pair ID in your AWS config file and use the service builder to
+instantiate the CloudFront client. The following is an example config file that specifies the CloudFront key information.
+
+.. code-block:: php
+
+ <?php
+
+ return array(
+ 'includes' => array('_aws'),
+ 'services' => array(
+ 'default_settings' => array(
+ 'params' => array(
+ 'key' => '',
+ 'secret' => '',
+ 'region' => 'us-west-2'
+ )
+ ),
+ 'cloudfront' => array(
+ 'extends' => 'cloudfront',
+ 'params' => array(
+ 'private_key' => '/path/to/your/cloudfront-private-key.pem',
+ 'key_pair_id' => ''
+ )
+ )
+ )
+ );
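+You can then use the service builder to load this config file and instantiate a fully configured client. The
+following is a minimal sketch; the config file path is an assumption, so substitute your own location.
+
+.. code-block:: php
+
+    <?php
+    require 'vendor/autoload.php';
+
+    use Aws\Common\Aws;
+
+    // Create the service builder from the custom config file
+    // (the path below is illustrative)
+    $aws = Aws::factory('/path/to/custom/config.php');
+
+    // The client returned here already has the private_key and
+    // key_pair_id params applied from the config file
+    $cloudFront = $aws->get('cloudfront');
+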
+
+You can sign a CloudFront URL for a video resource using either a canned or custom policy.
+
+.. code-block:: php
+
+ // Setup parameter values for the resource
+ $streamHostUrl = 'rtmp://example-distribution.cloudfront.net';
+ $resourceKey = 'videos/example.mp4';
+ $expires = time() + 300;
+
+ // Create a signed URL for the resource using the canned policy
+ $signedUrlCannedPolicy = $cloudFront->getSignedUrl(array(
+ 'url' => $streamHostUrl . '/' . $resourceKey,
+ 'expires' => $expires,
+ ));
+
+For versions of the SDK later than 2.3.1, instead of providing your private key information when you instantiate the
+client, you can provide it at the time you sign the URL.
+
+.. code-block:: php
+
+ $signedUrlCannedPolicy = $cloudFront->getSignedUrl(array(
+ 'url' => $streamHostUrl . '/' . $resourceKey,
+ 'expires' => $expires,
+ 'private_key' => '/path/to/your/cloudfront-private-key.pem',
+ 'key_pair_id' => '<cloudfront-key-pair-id>'
+ ));
+
+To use a custom policy, provide the ``policy`` key instead of ``expires``.
+
+.. code-block:: php
+
+    // Note: the heredoc body of this example was lost; the policy below
+    // is reconstructed following the documented CloudFront custom policy
+    // JSON format
+    $customPolicy = <<<POLICY
+    {
+        "Statement": [
+            {
+                "Resource": "{$streamHostUrl}/{$resourceKey}",
+                "Condition": {
+                    "DateLessThan": {"AWS:EpochTime": {$expires}}
+                }
+            }
+        ]
+    }
+    POLICY;
+
+    // Create a signed URL for the resource using the custom policy
+    $signedUrlCustomPolicy = $cloudFront->getSignedUrl(array(
+        'url'    => $streamHostUrl . '/' . $resourceKey,
+        'policy' => $customPolicy,
+    ));
+
+The form of the signed URL differs depending on whether the URL you are signing uses the "http" or "rtmp"
+scheme. For "http", the full, absolute URL is returned. For "rtmp", only the relative URL is returned for your
+convenience, because some players require the host and path to be provided as separate parameters.
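+As an illustrative sketch (the player configuration keys shown are assumptions that vary by player), here is how the
+two forms are typically consumed:
+
+.. code-block:: php
+
+    // "http" scheme: the absolute URL is returned and can be used directly,
+    // e.g. as the src of a <video> tag or the href of a link
+    $signedHttpUrl = $cloudFront->getSignedUrl(array(
+        'url'     => 'http://example-distribution.cloudfront.net/videos/example.mp4',
+        'expires' => time() + 300,
+    ));
+
+    // "rtmp" scheme: only the signed, relative resource path is returned;
+    // pass the host and the path to the player as separate parameters
+    $signedRtmpPath = $cloudFront->getSignedUrl(array(
+        'url'     => 'rtmp://example-distribution.cloudfront.net/videos/example.mp4',
+        'expires' => time() + 300,
+    ));
+    $playerConfig = array(
+        'streamer' => 'rtmp://example-distribution.cloudfront.net', // host
+        'file'     => $signedRtmpPath,                              // signed path
+    );
+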
+
+The following is an example of how you could use the signed URL to construct a web page displaying a video using
+JWPlayer. The same type of technique would apply to other players like FlowPlayer, but will require different
+client-side code.
+
+.. code-block:: html
+
+    <!-- Reconstructed sketch: the original markup of this example was lost.
+         Include your player's JavaScript and initialize the player where
+         indicated, passing it the streaming host and the signed resource
+         path returned by getSignedUrl(). -->
+    <html>
+    <head>
+        <title>Amazon CloudFront Streaming Example</title>
+    </head>
+    <body>
+        <div id="placeholder">The canned policy video will be here.</div>
+        <!-- Player embed code goes here. -->
+    </body>
+    </html>
+