AWS Lambda is a serverless computing platform: it lets you run code without managing servers, containers, or other infrastructure. The code you deploy must be written as an event handler (i.e. a Lambda function). After deployment, you send events to AWS and thus trigger your handler.
See the AWS Lambda welcome guide for more information.
There are two general ways of interacting with AWS: via the AWS Console or via the AWS Command Line Interface (CLI).
In this example we are going to run a simple Dasha application using AWS Lambda. It follows the basic AWS Lambda blank Node.js example; if you are not familiar with AWS Lambda, that example is a good way to dive in.
The AWS application is configured by the `template.yml` file, which describes all application components and ties them together.
To learn more about AWS templates, see the AWS SAM documentation.
In this example the whole application is configured by a single `AWS::Serverless::Function` resource (see the AWS docs to learn its properties).

Please pay attention to the following `AWS::Serverless::Function` properties:

- `Environment` - sets the environment variables that configure the Dasha application. Here you have to provide your Dasha apikey, the server that will be used to run your app (for now `en` and `ru` are available), and the application concurrency.
- `Timeout` - the limit on the application's running time. The maximum possible value is `900` (sec), so if you want to perform many calls, keep this limitation in mind.
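For reference, here is a minimal sketch of such a resource. The environment variable names, handler, and runtime below are illustrative assumptions, not necessarily the ones this example uses; check `template.yml` for the actual values:

```yaml
Resources:
  DashaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler          # assumed handler file/export
      Runtime: nodejs14.x             # assumed Node.js runtime
      Timeout: 900                    # maximum allowed value, in seconds
      Environment:
        Variables:
          DASHA_APIKEY: "<your Dasha apikey>"   # hypothetical variable name
          DASHA_SERVER: "en"                    # or "ru"
          DASHA_CONCURRENCY: "1"                # hypothetical variable name
```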
Control over AWS is exercised via CLI commands wrapped in `.sh` scripts:

- `1-create-bucket.sh` - creates an AWS S3 bucket
- `2-build-layer.sh` - installs the Node.js application dependencies and stores them in a folder; these dependencies will be used in deployment
- `3-deploy.sh` - deploys the AWS application using the application config `template.yml`
- `4-prepare-event.sh` - generates the file `event.json` that will be used to invoke the Lambda. The event structure is described in `event-template.json`:
  - `"appZipBase64"` - application files zipped and encoded with base64
  - `"conversationInputs"` - conversation inputs taken from `event-conversations.json`
- `5-invoke.sh` - sends the generated event (`event.json`) to the server
- Sign up at DashaAI
- Get your Dasha apikey (e.g. via the Dasha CLI: run `dasha account info`)
- Install the AWS requirements: instruction
- Run the script `1-create-bucket.sh` to create an AWS bucket.
Example output:
```
$ ./1-create-bucket.sh
make_bucket: lambda-artifacts-2233cac72b64eb92
```
- Run the script `2-build-layer.sh` to install dependencies and prepare them for deployment.
Example output:
```
$ ./2-build-layer.sh
added 249 packages, and audited 250 packages in 4s
25 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
```
- Run the script `3-deploy.sh` to deploy the application configured by `template.yml`.
Example output:
```
$ ./3-deploy.sh
Successfully packaged artifacts and wrote output template to file out.yml.
Execute the following command to deploy the packaged template
aws cloudformation deploy --template-file C:\Users\vkuja\my\documents\dasha\rep\dasha-doc-examples\Integrations\AWS-Lambda\out.yml --stack-name <YOUR STACK NAME>
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - dasha-aws-lambda-demo
```
Now everything is prepared for launching the application.
Our AWS application expects events to have the following structure (see `event-template.json`):
```
{
  "body": {
    "appZipBase64": <dasha_application_files_zipped_and_encoded_with_base64>,
    "conversationInputs": <array_of_conversation_inputs>
  }
}
```
So, to test this application you have to specify the conversation inputs in the `event-conversations.json` file.

Conversation inputs are expected to be an array; each element is a conversation input, and the conversations are executed one by one.

The current Dasha application needs a single parameter to start a conversation: the phone number to call.
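For instance, `event-conversations.json` might look like the following; the input name `phone` is an assumption here, so use whichever input names your Dasha application actually declares:

```json
[
  { "phone": "+12223334455" }
]
```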
After setting up the conversation inputs, run the script `4-prepare-event.sh` to prepare the file with the event object (`event.json`). This script zips and base64-encodes your app, parses `event-conversations.json`, and combines them all together.
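As a rough illustration (not the actual script), the preparation step boils down to the following; here the zipped application is faked with a placeholder file, and the conversation inputs are inlined instead of being read from `event-conversations.json`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fake the zipped application with a placeholder file;
# the real script would do something like: zip -r app.zip <app folder>
printf 'placeholder' > app.zip

# Base64-encode the archive, stripping newlines
APP_B64=$(base64 < app.zip | tr -d '\n')

# Conversation inputs (the real script reads event-conversations.json;
# the "phone" input name is a hypothetical example)
CONV_INPUTS='[{"phone": "+12223334455"}]'

# Assemble the event object the Lambda expects
cat > event.json <<EOF
{
  "body": {
    "appZipBase64": "$APP_B64",
    "conversationInputs": $CONV_INPUTS
  }
}
EOF
```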
After that, run `5-invoke.sh`.
The first run of the app may take a few minutes: it has to be deployed on our server, the NLU dataset has to be trained, etc. Further runs should be quicker, since the Dasha application is already cached and our server only needs to fetch it and perform the conversation.
To see what is going on at runtime, you can check the AWS logs of your app:

- Open AWS CloudWatch: https://console.aws.amazon.com/cloudwatch/
- Navigate to `Logs` -> `Log groups`
- Choose your app. Here you can see the log streams that correspond to application invocations.
- Choose the latest log stream (or whichever you want :) )
- Explore the logs
After the conversation is finished, its result (which contains the `output` data, `transcription`, `recordingUrl`, and timing info) is stored in `out.json` and logged to your console.
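Based on the fields listed above, `out.json` roughly has the following shape; contents are abbreviated and the time-related field names are not spelled out here, so treat this as a sketch rather than the exact schema:

```
{
  "output": { ... },
  "transcription": [ ... ],
  "recordingUrl": "...",
  ... time fields ...
}
```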