This module deploys Kibana for an Elasticsearch cluster. An Elasticsearch cluster is a natural prerequisite for Kibana; the cluster itself also requires certain AWS resources, so check the elasticsearch module documentation for those.
The Elasticsearch cluster requires two `terraform apply` runs: one with `bootstrap_mode = true` and another with `bootstrap_mode = false`. Use the following Terraform snippet to provision the cluster.
module "elasticsearch" {
source = "infrahouse/elasticsearch/aws"
version = "~> 0.6"
providers = {
aws = aws
aws.dns = aws
}
cluster_name = "some-cluster-name"
cluster_master_count = 3
cluster_data_count = 1
environment = "development"
internet_gateway_id = module.service-network.internet_gateway_id
key_pair_name = aws_key_pair.test.key_name
subnet_ids = module.service-network.subnet_private_ids
zone_id = var.zone_id
bootstrap_mode = var.bootstrap_mode
}
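The snippet reads `bootstrap_mode` from a variable so the same configuration can be applied twice. A minimal sketch of that variable, assuming you set it from the command line (the variable name matches the snippet above; the description text is my own):

```hcl
# Hypothetical variable declaration for the two-phase cluster provisioning.
variable "bootstrap_mode" {
  description = "true for the first apply (cluster bootstrap), false for every apply after that."
  type        = bool
  default     = true
}
```

With that in place, the two runs are `terraform apply -var="bootstrap_mode=true"` followed by `terraform apply -var="bootstrap_mode=false"`.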
Once the Elasticsearch cluster is ready (and by "ready" I mean the master and data nodes are up and running), you can provision Kibana.
module "kibana" {
source = "infrahouse/kibana/aws"
version = "~> 1.0"
providers = {
aws = aws
aws.dns = aws
}
asg_subnets = module.service-network.subnet_private_ids
elasticsearch_cluster_name = "some-cluster-name"
elasticsearch_url = var.elasticsearch_url
internet_gateway_id = module.service-network.internet_gateway_id
kibana_system_password = module.elasticsearch.kibana_system_password
load_balancer_subnets = module.service-network.subnet_public_ids
ssh_key_name = aws_key_pair.test.key_name
zone_id = var.zone_id
}
Note the inputs:

* `asg_subnets` - subnet IDs where the autoscaling group with the EC2 instances for the Kibana ECS cluster will be created. They need to be private subnets; you don't want to expose them to the Internet.
* `load_balancer_subnets` - subnet IDs where the load balancer will be created. They can be public, but I recommend deploying the load balancer in private subnets and configuring VPN access for users who need Kibana (see the sketch after this list).
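If you follow that recommendation, the module call mirrors the example above with only the load balancer inputs changed; `alb_internal` is described in the inputs table further down, and the private subnet reference is an assumption about your network module:

```hcl
# Sketch: the same kibana module call as above, but with an internal load balancer
# in private subnets. Users then reach Kibana over a VPN or another private path.
module "kibana" {
  source  = "infrahouse/kibana/aws"
  version = "~> 1.0"

  providers = {
    aws     = aws
    aws.dns = aws
  }

  alb_internal          = true                                      # internal ALB instead of an internet-facing one
  load_balancer_subnets = module.service-network.subnet_private_ids # private subnets instead of public

  asg_subnets                = module.service-network.subnet_private_ids
  elasticsearch_cluster_name = "some-cluster-name"
  elasticsearch_url          = var.elasticsearch_url
  internet_gateway_id        = module.service-network.internet_gateway_id
  kibana_system_password     = module.elasticsearch.kibana_system_password
  ssh_key_name               = aws_key_pair.test.key_name
  zone_id                    = var.zone_id
}
```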
The kibana module will output the URL where the Kibana UI is available. Use the `elastic` username and its password to access Kibana for the first time.
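If you want to surface those values from your own root configuration, one way (a sketch; the output names come from the outputs table below, and marking the credentials `sensitive` is my own choice) is to re-export them:

```hcl
output "kibana_url" {
  description = "URL where the Kibana UI is available."
  value       = module.kibana.kibana_url
}

output "kibana_login" {
  description = "Initial Kibana credentials."
  value = {
    username = module.kibana.kibana_username
    password = module.kibana.kibana_password
  }
  sensitive = true # keeps the password out of plain `terraform apply` output
}
```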
## Requirements

| Name | Version |
|------|---------|
| terraform | ~> 1.5 |
| aws | ~> 5.11 |
## Providers

| Name | Version |
|------|---------|
| aws | ~> 5.11 |
| random | n/a |
## Modules

| Name | Source | Version |
|------|--------|---------|
| kibana | registry.infrahouse.com/infrahouse/ecs/aws | = 3.5.1 |
## Resources

| Name | Type |
|------|------|
| random_string.kibana-encryptionKey | resource |
| aws_route53_zone.kibana_zone | data source |
| aws_subnet.selected | data source |
| aws_vpc.selected | data source |
## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| alb_internal | If true, the LB will be internal. | `bool` | `false` | no |
| ami_id | Image for host EC2 instances. If not specified, the latest Amazon image will be used. | `string` | `null` | no |
| asg_subnets | Auto Scaling Group subnets. | `list(string)` | n/a | yes |
| elasticsearch_cluster_name | Elasticsearch cluster name. | `string` | n/a | yes |
| elasticsearch_url | URL of Elasticsearch masters. | `string` | n/a | yes |
| environment | Name of environment. | `string` | `"development"` | no |
| internet_gateway_id | Internet gateway ID. Usually created by 'infrahouse/service-network/aws'. | `string` | n/a | yes |
| kibana_system_password | Password for the kibana_system user. This user is an Elasticsearch built-in user. | `string` | n/a | yes |
| load_balancer_subnets | Load balancer subnets. | `list(string)` | n/a | yes |
| ssh_cidr_block | CIDR range that is allowed to SSH into the backend instances. | `string` | `null` | no |
| ssh_key_name | SSH key name installed on ECS host instances. | `string` | n/a | yes |
| zone_id | Zone where DNS records will be created for the service and certificate validation. | `string` | n/a | yes |
## Outputs

| Name | Description |
|------|-------------|
| kibana_password | n/a |
| kibana_url | n/a |
| kibana_username | n/a |