The idea is to leverage LLMs and a serverless architecture to handle word search puzzle generation end to end.
- Word Search word generation using Ollama
- Word Search word generation using OpenAI
- Puzzle generation
- Puzzle for end users (HTML)
- Puzzle for end users (PDF)
- Custom Puzzle Dimension
- Words Validation
- Safe Words
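The word-validation and safe-words features could work along these lines. This is a minimal sketch: the `BLOCKLIST`, length limits, and function names are illustrative assumptions, not the repo's actual logic.

```python
# Illustrative word validation; the real checks in this repo may differ.
BLOCKLIST = {"unsafe"}  # hypothetical safe-words blocklist


def is_valid(word: str, max_len: int = 12) -> bool:
    """Accept purely alphabetic words that fit the grid and pass the blocklist."""
    return word.isalpha() and 2 <= len(word) <= max_len and word.lower() not in BLOCKLIST


def filter_words(words: list[str]) -> list[str]:
    """Keep only words that are safe to place in a puzzle."""
    return [w for w in words if is_valid(w)]
```

For example, `filter_words(["earth", "mars", "x", "not a word", "unsafe"])` keeps only `earth` and `mars`.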
- Ensure you have the AWS CLI, AWS CDK, and Docker installed
- Run `cdk deploy` to deploy the stack
- Done!
Prerequisite: You will need the endpoint of your Ollama server. Read more on Ollama here
$ python3 scripts/ollama-word-generate.py "solar system" "llama2:13b" "http://localhost:11434" > /tmp/test.json
$ cat /tmp/test.json
{
"title": "Solar System",
"0": "earth",
"1": "mars",
"2": "jupiter",
"3": "saturn",
"4": "uranus",
"5": "neptune",
"6": "sun",
"7": "mercury",
"8": "moon",
"9": "pluto"
}
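Under the hood, a script like this presumably calls Ollama's `/api/generate` REST endpoint with `stream` disabled. Here is a hedged sketch; the prompt wording and function names are assumptions, not taken from the repo:

```python
import json
import urllib.request


def build_request(topic: str, model: str, host: str) -> urllib.request.Request:
    """Build a non-streaming Ollama generate request asking for a titled word list."""
    prompt = (
        f"List 10 single words about '{topic}'. Reply as JSON with a 'title' "
        "key and keys '0' through '9' for the words."
    )
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{host}/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


def fetch_words(topic: str, model: str, host: str) -> dict:
    """Send the request and parse the model's JSON reply."""
    with urllib.request.urlopen(build_request(topic, model, host)) as resp:
        body = json.load(resp)
    # Ollama wraps the model's generated text in the 'response' field.
    return json.loads(body["response"])
```

Swapping this for the OpenAI variant would mean changing only the endpoint, auth header, and payload shape.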
Note: You can test the application by uploading a list of words to the deployed S3 bucket (under /words) and viewing the generated puzzle in the same bucket (under /puzzle)
$ aws s3 cp /tmp/test.json s3://<bucket-name>/words/test.json
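The puzzle-generation step triggered by the `/words` upload might place words into a letter grid roughly like the sketch below (horizontal and vertical placement only). The `size` parameter corresponds to the custom puzzle dimension feature; the repo's actual algorithm, directions, and retry strategy may differ.

```python
import random
import string


def make_grid(words: list[str], size: int = 10, seed: int = 42) -> list[list[str]]:
    """Place each word horizontally or vertically, then fill blanks with random letters."""
    rng = random.Random(seed)
    grid = [[None] * size for _ in range(size)]
    for word in words:
        word = word.upper()
        for _ in range(200):  # retry random positions until the word fits
            if rng.random() < 0.5:  # horizontal
                r, c = rng.randrange(size), rng.randrange(size - len(word) + 1)
                cells = [(r, c + i) for i in range(len(word))]
            else:  # vertical
                r, c = rng.randrange(size - len(word) + 1), rng.randrange(size)
                cells = [(r + i, c) for i in range(len(word))]
            # A cell is usable if empty or already holding the same letter (overlap).
            if all(grid[y][x] in (None, ch) for (y, x), ch in zip(cells, word)):
                for (y, x), ch in zip(cells, word):
                    grid[y][x] = ch
                break
    for row in grid:  # fill remaining cells with random uppercase letters
        for i, ch in enumerate(row):
            if ch is None:
                row[i] = rng.choice(string.ascii_uppercase)
    return grid
```

Rendering this grid as an HTML table or a PDF page would cover the two end-user output formats listed above.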
Useful commands:
- `npm run build`   compile typescript to js
- `npm run watch`   watch for changes and compile
- `npm run test`    perform the jest unit tests
- `cdk deploy`      deploy this stack to your default AWS account/region
- `cdk diff`        compare deployed stack with current state
- `cdk synth`       emits the synthesized CloudFormation template