Mirror of https://github.com/Infisical/infisical.git (synced 2026-01-08 23:18:05 -05:00)
Adding self-hosting guides to existing documentation.
Updating:
- Docker compose
- overview (introduction)
- kubernetes-helm

Adding:
- AWS
- GCP
@@ -312,7 +312,9 @@
        "self-hosting/deployment-options/standalone-infisical",
        "self-hosting/deployment-options/docker-swarm",
        "self-hosting/deployment-options/docker-compose",
-       "self-hosting/deployment-options/kubernetes-helm"
+       "self-hosting/deployment-options/kubernetes-helm",
+       "self-hosting/deployment-options/aws-native",
+       "self-hosting/deployment-options/gcp-native"
      ]
    },
    {
229 docs/self-hosting/deployment-options/aws-native.mdx Normal file
@@ -0,0 +1,229 @@
|
||||
---
|
||||
title: "AWS (ECS with Fargate)"
|
||||
description: "Deploy Infisical securely on AWS using ECS Fargate, RDS, ElastiCache, and ALB."
|
||||
---
|
||||
Learn how to deploy **Infisical** on Amazon Web Services using **Elastic Container Service (ECS)** with Fargate. This guide covers setting up Infisical in a production-ready AWS environment using **Amazon RDS** (PostgreSQL) for the database, **Amazon ElastiCache** (Redis) for caching, and an **Application Load Balancer (ALB)** for routing traffic. We will also configure secure secret storage, IAM roles, logging, and auto-scaling to ensure the deployment is robust, secure, and highly available.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- An AWS account with permissions to create VPCs, ECS clusters, RDS, ElastiCache, and ALB resources.
|
||||
- Basic knowledge of AWS networking (VPC, subnets, security groups) and ECS concepts.
|
||||
- AWS CLI set up (optional, for using CLI examples instead of the AWS Management Console).
|
||||
- An Infisical Docker image tag (find a specific version on Docker Hub to use for your deployment — avoid using `latest` in production).
|
||||
|
||||
|
||||
<Steps>
|
||||
<Step title="Set up network infrastructure (VPC, subnets, security groups)">
|
||||
To host Infisical, first prepare an AWS Virtual Private Cloud (VPC) network:
|
||||
|
||||
- **VPC & Subnets:** Create a VPC spanning at least two Availability Zones. In each AZ, create one **public subnet** (for the load balancer) and one **private subnet** (for ECS tasks, RDS, and Redis). Ensure the subnets have appropriate route tables (public subnets with a route to an Internet Gateway, private subnets with a route to a NAT Gateway).
|
||||
- **NAT Gateway:** Deploy a NAT Gateway in a public subnet to allow outbound internet access from the private subnets (for pulling container images, sending emails, etc.). Alternatively, use VPC Endpoints (e.g. for ECR, S3, SES) to minimize internet egress.
|
||||
- **Security Groups:** Create security groups to control traffic:
|
||||
- An **ALB security group** allowing inbound HTTP (port 80) and HTTPS (port 443) from the internet (0.0.0.0/0) as needed.
|
||||
- An **ECS tasks security group** allowing inbound traffic on Infisical’s port (8080) *only* from the ALB’s security group. Also allow necessary egress for the tasks (for example, to the internet via NAT or to other AWS services in the VPC).
|
||||
- Lock down your RDS and Redis instances with their own security groups that allow access **only** from the ECS tasks’ security group on the appropriate ports (5432 for Postgres, 6379 for Redis).
|
||||
|
||||
Attach the security group for the ALB to your load balancer, and the ECS tasks security group to the Infisical service tasks (the ECS service will apply it to tasks). With this setup, the ALB can reach the Infisical containers, and the containers can reach the database and cache, while external access to the containers is blocked except through the ALB.
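If you prefer the AWS CLI over the console, the two application-facing security groups can be created roughly as follows. This is a minimal sketch — the VPC ID, group names, and the `<...>` group IDs are placeholders to substitute with your own values.

```bash
# Illustrative only: replace vpc-xxxxxxxx and the <...> IDs with your own values.
aws ec2 create-security-group --group-name infisical-alb-sg \
  --description "ALB: HTTP/HTTPS from the internet" --vpc-id vpc-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id <alb-sg-id> \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <alb-sg-id> \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

aws ec2 create-security-group --group-name infisical-ecs-sg \
  --description "ECS tasks: port 8080 from the ALB only" --vpc-id vpc-xxxxxxxx
# Reference the ALB security group as the traffic source instead of a CIDR.
aws ec2 authorize-security-group-ingress --group-id <ecs-sg-id> \
  --protocol tcp --port 8080 --source-group <alb-sg-id>
```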
|
||||
</Step>
|
||||
|
||||
<Step title="Provision the database (PostgreSQL) and cache (Redis)">
|
||||
Set up the persistence layers for Infisical:
|
||||
|
||||
- **Amazon RDS (PostgreSQL):** Create a PostgreSQL database instance (e.g. db.t3.small or larger depending on load) in the private subnets. Enable **Multi-AZ** deployment for high availability (this creates a standby in a second AZ). Note the database endpoint, port, database name, username, and password. For security, **disable public accessibility** on the instance and ensure it uses the RDS security group that only allows the ECS tasks to connect. Enable automated backups with a retention period (to support point-in-time recovery).
|
||||
- **Amazon ElastiCache (Redis):** Launch a Redis cache cluster (select Redis engine) in the private subnets. For production, use a **Redis replication group** with Multi-AZ enabled (primary and replica in different AZs) for high availability. If available, enable encryption in-transit and at-rest for Redis. Use the cache security group to restrict access to the ECS tasks. Note the Redis primary endpoint (and port).
|
||||
|
||||
For instance sizing recommendations, see our [hardware requirements](/self-hosting/configuration/requirements). Once these services are up, you should have connection details for the database and cache:
|
||||
- **Database URI** (connection string) – e.g. `postgresql://<username>:<password>@<db-endpoint>:5432/<dbname>`
|
||||
- **Redis URI** – e.g. `redis://:<password>@<cache-endpoint>:6379`
|
||||
|
||||
Make sure these are accessible from the ECS subnets and security group (test connectivity if possible).
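For reference, equivalent resources can be provisioned from the AWS CLI. The sketch below uses placeholder names, sizes, subnet groups, and security group IDs — treat it as a starting point rather than a definitive configuration.

```bash
# PostgreSQL on RDS: Multi-AZ, private, with automated backups (values are examples).
aws rds create-db-instance \
  --db-instance-identifier infisical-postgres \
  --engine postgres \
  --db-instance-class db.t3.small \
  --allocated-storage 20 \
  --master-username infisical \
  --master-user-password '<STRONG_PASSWORD>' \
  --db-subnet-group-name <private-db-subnet-group> \
  --vpc-security-group-ids <rds-sg-id> \
  --multi-az \
  --no-publicly-accessible \
  --backup-retention-period 7

# Redis on ElastiCache: primary + replica across AZs with automatic failover.
aws elasticache create-replication-group \
  --replication-group-id infisical-redis \
  --replication-group-description "Infisical cache" \
  --engine redis \
  --cache-node-type cache.t3.small \
  --num-cache-clusters 2 \
  --automatic-failover-enabled \
  --multi-az-enabled \
  --cache-subnet-group-name <private-cache-subnet-group> \
  --security-group-ids <cache-sg-id>
```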
|
||||
</Step>
|
||||
|
||||
<Step title="Securely store Infisical secrets and configuration">
|
||||
Infisical requires certain secrets and configuration values to run. You should generate and store these securely using AWS managed secret storage services rather than hard-coding them:
|
||||
|
||||
- **Encryption Key (`ENCRYPTION_KEY`):** This key is used to encrypt secrets in the database. Generate a random 16-byte hex string (32 hex characters). For example, you can run `openssl rand -hex 16` to get a value.
|
||||
- **Authentication Secret (`AUTH_SECRET`):** This secret is used for JWT signing. Generate a random 32-byte base64 string (e.g. `openssl rand -base64 32`).
|
||||
- **Database Connection URI (`DB_CONNECTION_URI`):** Construct the Postgres URI for the RDS instance (as noted in the previous step).
|
||||
- **Redis URL (`REDIS_URL`):** Construct the Redis connection URL (including the password if your Redis instance uses one).
|
||||
- **Site URL (`SITE_URL`):** The URL where your Infisical instance will be accessed by users. For now, this can be an HTTP URL of your ALB (or a placeholder domain); later, you will switch it to your HTTPS domain.
|
||||
|
||||
Use AWS **Parameter Store (SSM)** or **Secrets Manager** to store these values as secure parameters/secrets. The ECS task definition will fetch them at runtime. For example, to store the `ENCRYPTION_KEY` you can use:
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Parameter Store (SSM)">
|
||||
```bash
|
||||
aws ssm put-parameter --name "/infisical/ENCRYPTION_KEY" --value "<YOUR_RANDOM_HEX_KEY>" --type "SecureString"
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="Secrets Manager">
|
||||
```bash
|
||||
aws secretsmanager create-secret --name "InfisicalEncryptionKey" --secret-string "<YOUR_RANDOM_HEX_KEY>"
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
Repeat the above for `AUTH_SECRET`, `DB_CONNECTION_URI`, `REDIS_URL`, etc. (adjusting commands and names accordingly). Using a consistent naming convention (e.g. prefix parameters with `/infisical/`) can help with organization and IAM policies.
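For example, the remaining values can be stored with the same Parameter Store command (names follow the `/infisical/` convention above; the values shown are placeholders):

```bash
aws ssm put-parameter --name "/infisical/AUTH_SECRET" \
  --value "<YOUR_RANDOM_BASE64_SECRET>" --type "SecureString"
aws ssm put-parameter --name "/infisical/DB_CONNECTION_URI" \
  --value "postgresql://<username>:<password>@<db-endpoint>:5432/<dbname>" --type "SecureString"
aws ssm put-parameter --name "/infisical/REDIS_URL" \
  --value "redis://:<password>@<cache-endpoint>:6379" --type "SecureString"
```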
|
||||
|
||||
<Warning>
|
||||
Do **not** use Infisical’s example credentials (from the sample `.env.example` file) in production. Always generate a unique **ENCRYPTION_KEY** and **AUTH_SECRET** to secure your instance.
|
||||
</Warning>
|
||||
</Step>
|
||||
|
||||
<Step title="Set up an ECS cluster and IAM roles">
|
||||
Next, prepare ECS and IAM resources for the deployment:
|
||||
|
||||
- **ECS Cluster:** Create an ECS cluster (Fargate) if you don’t already have one. No EC2 instances are needed for Fargate — the cluster is a logical grouping for your tasks.
|
||||
- **Task Execution Role:** Ensure you have an IAM role for ECS tasks to use at launch. AWS provides a managed role called **`ecsTaskExecutionRole`**. If not already present in your account, create one and attach the AWS-managed policy **AmazonECSTaskExecutionRolePolicy**. This role allows ECS to pull container images from ECR/Docker Hub and send logs to CloudWatch.
|
||||
- **Task Role:** Create a separate IAM role (e.g. `InfisicalTaskRole`) that the Infisical container will assume at runtime. This role should have permissions to access the secrets you stored in SSM or Secrets Manager. For example, you might attach a policy allowing `ssm:GetParameter` (and `GetParameters`) for the parameter paths you created, and/or `secretsmanager:GetSecretValue` for the specific secrets ARNs. Limit the permissions to only the necessary resources (principle of least privilege).
|
||||
- **Associate Roles:** When defining your ECS Task Definition (in the next step), you will specify the **Task Execution Role** and the **Task Role**. The execution role is for ECS agent operations, and the task role is for the application (Infisical) to access AWS resources at runtime.
|
||||
|
||||
Make sure the IAM roles are in place before creating the task definition. This ensures that your task will be able to retrieve secrets and send logs without permission issues.
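As a rough CLI sketch (role, cluster, and file names are illustrative), the cluster and task role could be set up like this; the trust policy file should trust `ecs-tasks.amazonaws.com`, and the permissions policy should be scoped to your `/infisical/*` parameters or secret ARNs:

```bash
# Fargate-only cluster: a logical grouping, no EC2 capacity required.
aws ecs create-cluster --cluster-name infisical-cluster

# Runtime role assumed by the Infisical container.
aws iam create-role --role-name InfisicalTaskRole \
  --assume-role-policy-document file://ecs-tasks-trust-policy.json

# Least-privilege read access to the stored parameters/secrets.
aws iam put-role-policy --role-name InfisicalTaskRole \
  --policy-name InfisicalSecretsRead \
  --policy-document file://infisical-secrets-read-policy.json
```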
|
||||
</Step>
|
||||
|
||||
<Step title="Create the Infisical ECS task definition">
|
||||
Define an ECS **Task Definition** for the Infisical service:
|
||||
|
||||
- **Launch type:** Fargate (serverless).
|
||||
- **Resource allocation:** Choose CPU/memory for the task. Infisical can run on a smaller Fargate task for low usage (e.g. 0.25 vCPU & 0.5 GB memory), but adjust based on your expected load.
|
||||
- **Container definition:** Add a container for Infisical:
|
||||
- **Image:** Use the Infisical Docker image and tag (for example, `infisical/infisical:<VERSION>`). It's recommended to pin a specific version.
|
||||
- **Port Mapping:** Container port 8080 (this is the default port Infisical listens on). We will later connect this to the ALB.
|
||||
- **Environment Variables:** Set at least `SITE_URL` (e.g. `http://<ALB-DNS-name>` for now or your custom domain), and **`HOST=0.0.0.0`** so that the container listens on all interfaces (not just localhost). You can also set any other optional env vars as needed (see Infisical docs for additional configurations).
|
||||
- **Secrets:** Instead of putting sensitive values directly, reference the secrets from AWS Parameter Store / Secrets Manager. In the task definition, under the container’s **environment** or **secrets** section, map each required variable to the ARN or name of the secure value. For example, map `ENCRYPTION_KEY` to the SSM Parameter `/infisical/ENCRYPTION_KEY` (or the Secrets Manager ARN if you used Secrets Manager). Do the same for `AUTH_SECRET`, `DB_CONNECTION_URI`, `REDIS_URL`, etc.
|
||||
- **Logging:** Configure the container to send logs to CloudWatch Logs. Use the **awslogs** log driver with a log group name (e.g. `/infisical/production` or `/ecs/infisical`) and a region and log stream prefix. Ensure a CloudWatch Log Group is created (and consider setting a retention policy for it).
|
||||
|
||||
Your container definition will look roughly like this:
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "infisical",
|
||||
"image": "infisical/infisical:<VERSION>",
|
||||
"essential": true,
|
||||
"portMappings": [
|
||||
{
|
||||
"containerPort": 8080,
|
||||
"protocol": "tcp"
|
||||
}
|
||||
],
|
||||
"environment": [
|
||||
{ "name": "SITE_URL", "value": "http://<ALB-DNS-name>" },
|
||||
{ "name": "HOST", "value": "0.0.0.0" }
|
||||
],
|
||||
"secrets": [
|
||||
{ "name": "ENCRYPTION_KEY", "valueFrom": "<SSM or Secrets Manager ARN for ENCRYPTION_KEY>" },
|
||||
{ "name": "AUTH_SECRET", "valueFrom": "<ARN for AUTH_SECRET>" },
|
||||
{ "name": "DB_CONNECTION_URI","valueFrom": "<ARN for DB_CONNECTION_URI>" },
|
||||
{ "name": "REDIS_URL", "valueFrom": "<ARN for REDIS_URL>" }
|
||||
],
|
||||
"logConfiguration": {
|
||||
"logDriver": "awslogs",
|
||||
"options": {
|
||||
"awslogs-group": "/infisical/production",
|
||||
"awslogs-region": "<AWS_REGION>",
|
||||
"awslogs-stream-prefix": "infisical"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
In the task definition JSON, you will also specify:
|
||||
- **executionRoleArn:** the ARN of your `ecsTaskExecutionRole`.
|
||||
- **taskRoleArn:** the ARN of your `InfisicalTaskRole` (the role with SSM/Secrets access).
|
||||
- **requiresCompatibilities:** include `"FARGATE"`, and set the Fargate platform version if needed.
|
||||
- **networkMode:** use `"awsvpc"` (required for Fargate).
|
||||
|
||||
Once the task definition is ready and saved, you can proceed to deploy it as a service.
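If you keep the task definition as a JSON file, it can be registered from the CLI (the file name here is just an example):

```bash
aws ecs register-task-definition --cli-input-json file://infisical-task-definition.json
```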
|
||||
|
||||
<Note>
|
||||
If your environment is air-gapped or has no internet access, push the Infisical image to a private **AWS ECR** repository and use that image in the task definition. Also configure **VPC Endpoints** (PrivateLink) for ECR (and other needed services) so that the tasks can pull the image and access AWS services without a NAT Gateway.
|
||||
</Note>
|
||||
</Step>
|
||||
|
||||
<Step title="Deploy the ECS service with Fargate and an ALB">
|
||||
Now deploy Infisical as an ECS Service and expose it via an Application Load Balancer:
|
||||
|
||||
- **Application Load Balancer:** Create an ALB (if not already created) in your public subnets. Assign the ALB the security group that allows HTTP/HTTPS from the internet. Configure a target group for the ALB (target type = IP, since Fargate uses awsvpc networking). Use **HTTP** (and later HTTPS) on port 80 (and 443) for the listener, and set the target group’s port to 8080. For the health check, you can use the default path ("/") or a specific health endpoint if Infisical provides one (e.g. `/api/health`). Ensure the health check protocol is HTTP for now (we will switch to HTTPS when we add a certificate).
|
||||
- **ECS Service:** Create a new ECS service on your cluster:
|
||||
- Select the task definition revision for Infisical and Fargate launch type.
|
||||
- Choose the cluster VPC and **private subnets** for the tasks (the tasks should not run in public subnets).
|
||||
- Assign the ECS tasks security group to the service (so tasks use that SG allowing port 8080 from ALB and access to DB/Redis).
|
||||
- **Load Balancer Integration:** Enable the service to use the ALB. Choose the ALB and the target group you created. During this setup, you'll specify the container name and container port (8080) to hook into the target group. This allows the ALB to route traffic to the Infisical container. Enable the option to **assign a public IP** to the tasks *only if needed* (usually not necessary since ALB will route traffic).
|
||||
- **Desired count:** Start with at least 1 task (for testing). For production, consider starting with 2 tasks (in different AZs) for high availability.
|
||||
- **Auto Scaling (optional at this point):** You can set up auto-scaling policies for the service to add or remove tasks based on CPU, memory, or request load. This can also be configured later in the service’s **Auto Scaling** settings.
|
||||
|
||||
After creating the service, ECS will launch the tasks. Verify that the task(s) reach a **RUNNING** state and pass the ALB health checks. Once healthy, try accessing Infisical via the ALB’s DNS name (you can find this in the ALB description in AWS). For example, `http://<your-alb-dns-name>` should show the Infisical web interface (sign-up page).
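The same service can be created from the CLI — a sketch with placeholder subnet IDs, security group IDs, and target group ARN:

```bash
aws ecs create-service \
  --cluster infisical-cluster \
  --service-name infisical \
  --task-definition infisical \
  --launch-type FARGATE \
  --desired-count 2 \
  --network-configuration "awsvpcConfiguration={subnets=[<private-subnet-a>,<private-subnet-b>],securityGroups=[<ecs-sg-id>],assignPublicIp=DISABLED}" \
  --load-balancers "targetGroupArn=<target-group-arn>,containerName=infisical,containerPort=8080"
```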
|
||||
|
||||
<Tip>
|
||||
For production, run at least **2 Infisical tasks** spread across different Availability Zones. This ensures zero-downtime deployments (rolling updates) and resilience against an AZ outage.
|
||||
</Tip>
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
After completing the above steps, your Infisical instance should be up and running on AWS. You can now proceed with any necessary post-deployment steps like creating an admin account, configuring SMTP (for emails via AWS SES or another provider), etc. The sections below provide additional guidance for operating your Infisical deployment in a production environment.
|
||||
|
||||
|
||||
|
||||
## Additional Configuration & Best Practices
|
||||
<AccordionGroup>
|
||||
<Accordion title="Backup Strategy">
|
||||
Keeping regular backups is critical for a production deployment:
|
||||
- **Database Backups:** Use Amazon RDS automated backups or snapshots to regularly back up the PostgreSQL database. Ensure you have point-in-time recovery enabled by setting an appropriate retention period for automated backups. It’s also a good practice to periodically take manual snapshots (for example, before major upgrades) and to test restoring those snapshots to validate your backup process.
|
||||
- **Encryption Key Backup:** The `ENCRYPTION_KEY` is required to decrypt your secrets in the database. Store a secure copy of this key in a protected vault or key management system (separately from the running container). If you lose this key, any encrypted data in the database becomes unrecoverable, even if the database is restored from backup. Treat this key as a crown jewel — back it up offline and restrict access.
|
||||
- **Redis (Cache) Backups:** Infisical’s Redis is primarily used as a caching and ephemeral store. It is not required to back up Redis data for normal operations. In case of a Redis node failure, a new node will start empty and Infisical will rebuild cache entries as needed. However, if you use Redis for any persistent state (e.g., session data), consider enabling Redis persistence (AOF or snapshot) and back up those snapshots to S3. For most deployments, focusing on DB backups is sufficient.
|
||||
- **Config & Other Data:** Keep copies of configuration files or environment values (except sensitive ones, which should be in Parameter Store/Secrets Manager anyway). If you have customized Infisical configuration (like custom certificates or plugins), ensure those are backed up as well.
|
||||
- **Disaster Recovery Drills:** Periodically simulate a recovery: for instance, restore the RDS database snapshot to a new instance and spin up Infisical in a test environment using that data and the saved `ENCRYPTION_KEY`. This will verify that your backups and keys are valid and that you know the restore procedure.
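For example, an ad-hoc snapshot before a risky change (such as an upgrade) can be taken with the CLI — the identifiers below are illustrative:

```bash
aws rds create-db-snapshot \
  --db-instance-identifier infisical-postgres \
  --db-snapshot-identifier infisical-pre-upgrade-$(date +%Y%m%d)
```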
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Upgrade Instructions">
|
||||
Upgrading Infisical (to apply updates or security patches) should be done carefully to minimize downtime:
|
||||
1. **Plan & Review:** Check Infisical’s release notes for the version you plan to upgrade to. Note any new environment variables or migration steps required. Ensure your current version is supported to upgrade directly (if not, you may need intermediate upgrades).
|
||||
2. **Backup:** Prior to upgrading, take a fresh backup of your PostgreSQL database (snapshot) and ensure you have the current `ENCRYPTION_KEY` secured. This guarantees that you can roll back if something goes wrong.
|
||||
3. **Update Task Definition:** Create a new revision of your ECS task definition with the **image** tag updated to the new Infisical version. Also update any new or changed environment variables required by the new version.
|
||||
4. **Deploy Update:** In the ECS service, set the task definition to the new revision (or if you use the “Deploy” option, choose the new revision). If your service has more than 1 task (as recommended for HA), ECS will perform a rolling update: it will launch a new task with the new version before stopping the old one, ensuring continuity. Monitor the deployment in the ECS console.
|
||||
5. **Monitor Health:** Watch the ALB target group health and the ECS service events. The new tasks should register as healthy (passing health checks). Also monitor the application logs via CloudWatch for any errors during startup (like database migrations).
|
||||
6. **Post-Upgrade Tests:** Once the new version is running, quickly test core functionality (e.g., log into the Infisical dashboard, ensure secrets can be accessed). Verify that background jobs (if any, like secret syncing or integrations) are working.
|
||||
7. **Roll-back Plan:** If the new version is not functioning correctly and you need to roll back, you can revert the ECS service to the previous task definition revision (with the earlier image tag) and redeploy. Having the DB snapshot from before the upgrade is useful in case the new version made breaking changes to the database schema — in such a case, you might need to restore the database to the old snapshot **and** use the old container version.
|
||||
8. **Zero-Downtime Tip:** To achieve zero downtime upgrades, ensure you have at least 2 tasks running during deployment. ECS can bring up new tasks before terminating all old ones. Also configure health check grace periods and deployment preferences (in the ECS service settings) appropriately to avoid premature shutdown of old tasks.
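The deployment step itself can also be triggered from the CLI — a minimal sketch assuming the cluster and service names used earlier in this guide:

```bash
# Point the service at the new task definition revision; ECS performs a rolling update.
aws ecs update-service \
  --cluster infisical-cluster \
  --service infisical \
  --task-definition infisical:<NEW_REVISION>

# Watch the rollout progress.
aws ecs describe-services --cluster infisical-cluster --services infisical \
  --query 'services[0].deployments'
```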
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Monitoring & Telemetry">
|
||||
Maintaining visibility into your Infisical deployment is important for reliability and performance:
|
||||
- **AWS CloudWatch Logs:** All Infisical container logs are shipped to CloudWatch (as configured in the task definition). Set up CloudWatch Logs retention as needed (the default is indefinite, but you may choose a retention period). You can search the logs for errors or setup CloudWatch Log Insights queries for common issues. Consider creating CloudWatch Alarms on certain log patterns if critical (e.g., out-of-memory errors).
|
||||
- **Metrics and Auto Scaling:** Enable **CloudWatch Container Insights** for ECS to get CPU, memory, and network metrics for your tasks and cluster. This can help you visualize resource usage. You can create CloudWatch Alarms on high CPU or memory, and tie them to ECS Service Auto Scaling to automatically scale out/in tasks based on demand. For example, you might target keeping CPU utilization at 50% and allow scaling between 2 and 10 tasks.
|
||||
- **Application Health:** The ALB health check provides basic availability monitoring. You can augment this with Route 53 health checks or AWS CloudWatch Synthetics canaries to regularly test the Infisical HTTP(S) endpoint and alert if it's down or responding slowly.
|
||||
- **Infisical Telemetry:** Infisical natively exposes metrics in OpenTelemetry (OTEL) format. You can enable detailed application metrics by setting environment variables (such as `OTEL_TELEMETRY_COLLECTION_ENABLED=true` and choosing an export type). Infisical can expose a `/metrics` endpoint (on a separate port, 9464) for Prometheus scrapes or push metrics to an OTEL collector. By integrating these metrics, you can monitor internal stats like request rates, latency, errors, etc., using tools like **Prometheus/Grafana** or cloud monitoring services.
|
||||
- **Tracing and APM:** If deeper tracing is required, you could run the Infisical container with an OpenTelemetry agent or AWS X-Ray daemon (if Infisical supports it). Check Infisical’s documentation for any distributed tracing support. At minimum, logs and metrics should cover most needs.
|
||||
- **Alerting:** Set up alerts for key events. For example, configure CloudWatch Alarms to email/notify you on high error rates, low available database storage, high CPU on the tasks, etc. Ensure your team is notified proactively of any issue with the Infisical service so you can respond quickly.
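To wire up the CPU-based service auto-scaling described above, one common approach is Application Auto Scaling with a target-tracking policy — the resource ID, bounds, and target value below are examples:

```bash
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --resource-id service/infisical-cluster/infisical \
  --scalable-dimension ecs:service:DesiredCount \
  --min-capacity 2 --max-capacity 10

aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --resource-id service/infisical-cluster/infisical \
  --scalable-dimension ecs:service:DesiredCount \
  --policy-name infisical-cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{"TargetValue":50.0,"PredefinedMetricSpecification":{"PredefinedMetricType":"ECSServiceAverageCPUUtilization"}}'
```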
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="HTTPS Access via ALB">
|
||||
By default, we started with HTTP access. For production use, you should secure the application with HTTPS:
|
||||
- **Acquire Certificate:** Obtain an SSL/TLS certificate for your domain. The easiest way on AWS is to use **AWS Certificate Manager (ACM)** to request a public certificate for your custom domain (you’ll need to prove domain ownership via DNS validation). For example, get a cert for `infisical.yourdomain.com` (or wildcard `*.yourdomain.com` if needed).
|
||||
- **ALB HTTPS Listener:** In the ALB settings, add (or modify) a **HTTPS (443)** listener. Attach the ACM certificate to this listener, and configure the listener’s default rule to forward to the Infisical target group. You can also set the HTTP (80) listener to automatically redirect to HTTPS, ensuring all traffic is encrypted.
|
||||
- **Security Group Update:** Ensure the ALB’s security group allows inbound 443 traffic from the internet. You may choose to restrict port 80 if you force everything to HTTPS (except for the redirection).
|
||||
- **Update SITE_URL:** Change Infisical’s `SITE_URL` environment variable to use **`https://`** and your domain (for example, `https://infisical.yourdomain.com`). This is important because Infisical uses `SITE_URL` when forming links (for emails, redirects, etc.) and for security (CORS and cookie settings).
|
||||
- **DNS Setup:** Create a DNS record for your Infisical domain pointing to the ALB. If you use Route 53 and the domain is managed there, you can create an **ALIAS A record** to the ALB’s DNS name. Otherwise, use a CNAME record. Verify that your domain correctly resolves to the ALB.
|
||||
- **Test HTTPS:** Visit `https://infisical.yourdomain.com` and ensure you can see the Infisical interface over HTTPS without certificate warnings. All traffic between users and the ALB is now encrypted. The ALB will still forward the requests to ECS tasks over HTTP internally, but you can optionally enable encryption there as well (using TLS on the container side) if needed for an extra layer of security.
|
||||
- **Advanced (WAF & Security):** For additional protection, consider enabling **AWS WAF** on the ALB to filter out common web attacks. Also ensure that TLS security policies on the ALB are up-to-date (ACM default is fine for most cases). You may also want to enforce HSTS headers via Infisical’s configuration to ensure browsers always use HTTPS.
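The certificate request and HTTPS listener from the steps above can also be done via the CLI — a sketch with an example domain and placeholder ARNs:

```bash
# Request a public certificate with DNS validation (complete validation in Route 53 or your DNS provider).
aws acm request-certificate \
  --domain-name infisical.yourdomain.com \
  --validation-method DNS

# Once the certificate is issued, attach it to a new HTTPS listener on the ALB.
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```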
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Scalability, High Availability, and Disaster Recovery">
|
||||
**Scalability:** Infisical’s architecture is stateless at the application layer, which means you can scale out by running multiple Infisical containers in parallel behind the load balancer. To handle increased load, increase the task count (manually or via auto-scaling). The application logic doesn’t store session or state on the filesystem, so any container can serve any request. Just ensure your database and Redis can handle the throughput:
|
||||
- **Scaling the Database:** Monitor your RDS instance metrics (CPU, memory, connections, read/write IOPS). If nearing limits, you can scale up the instance class vertically. For read-heavy workloads, consider adding a Read Replica (though Infisical primarily performs primary DB operations for secret storage, so read replicas might be less effective unless you direct certain read-only queries to them).
|
||||
- **Scaling Redis:** If caching load increases, you might scale up the Redis node or enable cluster mode (sharding). However, avoid making Redis a single point of failure: in most cases, a primary-replica with auto-failover setup suffices for high availability rather than scaling for performance.
|
||||
|
||||
**High Availability:** We have already incorporated HA principles:
|
||||
- **Multi-AZ Deployments:** By running resources in multiple AZs (ECS tasks, RDS multi-AZ, Redis multi-AZ, ALB across AZs), the system can tolerate an AZ outage. Ensure that your ECS service is configured to spread tasks across AZs (the default for Fargate is to balance across subnets if you specify multiple).
|
||||
- **Multiple Tasks:** Run multiple ECS tasks so that if one container or underlying host fails, others continue serving traffic. The ALB will stop sending traffic to unhealthy tasks. ECS can automatically replace failed tasks.
|
||||
- **Auto Scaling:** As mentioned, set up ECS Service Auto Scaling to automatically add tasks on high load and remove them when load decreases. This helps maintain performance and also adds resilience (more tasks can handle if one goes down unexpectedly).
|
||||
- **Stateless Application:** Since Infisical servers are stateless, you can also perform rolling deployments and routine maintenance without downtime (as long as at least one task is always running to serve traffic).
|
||||
|
||||
**Disaster Recovery (DR):** Prepare for scenarios where an entire region or deployment becomes unavailable:
|
||||
- **Off-site Backups:** Regularly export RDS snapshots to a different region (AWS Backup or manual copy of snapshots to another region). Also consider periodically exporting critical data (like an encrypted dump of the Postgres database) and storing it in a secure location.
|
||||
- **Cross-Region Redundancy:** For higher resilience, you could maintain a warm standby in a separate AWS region. This would involve setting up a duplicate environment (perhaps using a read replica promoted to master in another region for the database, and a secondary Redis or relying on just re-deploying Redis). In an active/passive DR scenario, you would periodically synchronize data (e.g., replicate the database or at least ship backups) and be prepared to deploy the ECS service in the DR region. If Region A fails, you could switch DNS to point to Region B’s ALB after promoting the database there.
|
||||
- **Recovery Procedure:** Document and automate the recovery steps as much as possible. For example, use Infrastructure as Code (CloudFormation/Terraform) to be able to spin up the Infisical stack in a new region quickly. The main piece of unique data you must have for recovery is the latest database backup and the `ENCRYPTION_KEY` (plus any other secret values). With those, a new deployment can be made to read the existing secrets.
|
||||
- **Testing DR:** Just like backups, test your disaster recovery process. This could be as simple as spinning up a staging environment in another region using a snapshot of production data to verify that your team knows how to do it and that Infisical works with a restored database.
|
||||
- **Downtime Considerations:** In the event of a full region outage, there will be some downtime while you execute the DR plan (unless you’ve set up an active-active multi-region which is complex). Assess your business requirements for acceptable downtime and data loss (RPO/RTO) and tailor the DR approach accordingly (e.g., shorter RPO might mean more frequent backups or real-time replication).
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
@@ -2,174 +2,327 @@
|
||||
title: "Docker Compose"
|
||||
description: "Read how to run Infisical with Docker Compose template."
|
||||
---
|
||||
This self-hosting guide will walk you through the steps to self-host Infisical using Docker Compose.
|
||||
|
||||
Deploying Infisical with Docker Compose is one of the fastest ways to get a self-hosted instance running. This method uses a single Docker host to run Infisical and its dependencies (PostgreSQL and Redis) as containers. It’s ideal for trying out Infisical or for small-scale deployments that don’t require high availability.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Docker Compose">
|
||||
## Prerequisites
|
||||
- [Docker](https://docs.docker.com/engine/install/)
|
||||
- [Docker compose](https://docs.docker.com/compose/install/)
|
||||
|
||||
<Warning>
|
||||
This Docker Compose configuration is not designed for high-availability production scenarios.
|
||||
It includes just the essential components needed to set up an Infisical proof of concept (POC).
|
||||
To run Infisical in a highly available manner, refer to the [Docker Swarm guide](/self-hosting/deployment-options/docker-swarm).
|
||||
</Warning>
|
||||
## Prerequisites
|
||||
|
||||
## Verify prerequisites
|
||||
To verify that Docker compose and Docker are installed on the machine where you plan to install Infisical, run the following commands.
|
||||
- [Docker](https://docs.docker.com/engine/install/)
|
||||
- [Docker compose](https://docs.docker.com/compose/install/)
|
||||
|
||||
Check for docker installation
|
||||
<Warning>
|
||||
This Docker Compose configuration is not designed for high-availability production scenarios.
|
||||
It includes just the essential components needed to set up an Infisical proof of concept (POC).
|
||||
To run Infisical in a highly available manner, refer to the [Docker Swarm guide](/self-hosting/deployment-options/docker-swarm).
|
||||
</Warning>
|
||||
|
||||
## Verify prerequisites
|
||||
|
||||
To verify that Docker compose and Docker are installed on the machine where you plan to install Infisical, run the following commands.
|
||||
|
||||
Check for docker installation
|
||||
```bash
|
||||
docker
|
||||
```
|
||||
|
||||
Check for docker compose installation
|
||||
```bash
|
||||
docker-compose
|
||||
```
|
||||
|
||||
## Download docker compose file
|
||||
|
||||
You can obtain the Infisical docker compose file by using a command-line downloader such as `wget` or `curl`.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="curl">
|
||||
```bash
|
||||
curl -o docker-compose.prod.yml https://raw.githubusercontent.com/Infisical/infisical/main/docker-compose.prod.yml
|
||||
```
|
||||
|
||||
Check for docker compose installation
|
||||
```bash
|
||||
docker-compose
|
||||
```
|
||||
|
||||
## Download docker compose file
|
||||
You can obtain the Infisical docker compose file by using a command-line downloader such as `wget` or `curl`.
|
||||
If your system doesn't have either of these, you can use an equivalent command that works on your machine.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="curl">
|
||||
```bash
|
||||
curl -o docker-compose.prod.yml https://raw.githubusercontent.com/Infisical/infisical/main/docker-compose.prod.yml
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="wget">
|
||||
```bash
|
||||
wget -O docker-compose.prod.yml https://raw.githubusercontent.com/Infisical/infisical/main/docker-compose.prod.yml
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Configure instance credentials
|
||||
Infisical requires a set of credentials used for connecting to dependent services such as Postgres, Redis, etc.
|
||||
The default credentials can be downloaded using one of the commands listed below.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="curl">
|
||||
```bash
|
||||
curl -o .env https://raw.githubusercontent.com/Infisical/infisical/main/.env.example
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="wget">
|
||||
```bash
|
||||
wget -O .env https://raw.githubusercontent.com/Infisical/infisical/main/.env.example
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
Once downloaded, the credentials file will be saved to your working directory as a `.env` file.
|
||||
View all available configurations [here](/self-hosting/configuration/envars).
|
||||
|
||||
<Warning>
|
||||
The default .env file contains credentials that are intended solely for testing purposes.
|
||||
Please generate a new `ENCRYPTION_KEY` and `AUTH_SECRET` for use outside of testing.
|
||||
Instructions to do so, can be found [here](/self-hosting/configuration/envars).
|
||||
</Warning>
|
||||
|
||||
## Start Infisical
|
||||
Run the command below to start Infisical and all related services.
|
||||
|
||||
```bash
|
||||
docker-compose -f docker-compose.prod.yml up
|
||||
```
|
||||
|
||||
</Tab>
|
||||
<Tab title="Podman Compose">
|
||||
Podman Compose is an alternative way to run Infisical using Podman as a replacement for Docker. Podman is backwards compatible with Docker Compose files.
|
||||
|
||||
## Prerequisites
|
||||
- [Podman](https://podman-desktop.io/docs/installation)
|
||||
- [Podman Compose](https://podman-desktop.io/docs/compose)
|
||||
|
||||
<Warning>
|
||||
This Docker Compose configuration is not designed for high-availability production scenarios.
|
||||
It includes just the essential components needed to set up an Infisical proof of concept (POC).
|
||||
To run Infisical in a highly available manner, refer to the [Docker Swarm guide](/self-hosting/deployment-options/docker-swarm).
|
||||
</Warning>
|
||||
|
||||
|
||||
## Verify prerequisites
|
||||
To verify that Podman compose and Podman are installed on the machine where you plan to install Infisical, run the following commands.
|
||||
|
||||
Check for podman installation
|
||||
```bash
|
||||
podman version
|
||||
```
|
||||
|
||||
Check for podman compose installation
|
||||
```bash
|
||||
podman-compose version
|
||||
```
|
||||
|
||||
## Download Docker Compose file
|
||||
You can obtain the Infisical docker compose file by using a command-line downloader such as `wget` or `curl`.
|
||||
If your system doesn't have either of these, you can use an equivalent command that works on your machine.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="curl">
|
||||
```bash
|
||||
curl -o docker-compose.prod.yml https://raw.githubusercontent.com/Infisical/infisical/main/docker-compose.prod.yml
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="wget">
|
||||
```bash
|
||||
wget -O docker-compose.prod.yml https://raw.githubusercontent.com/Infisical/infisical/main/docker-compose.prod.yml
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Configure instance credentials
|
||||
Infisical requires a set of credentials used for connecting to dependent services such as Postgres, Redis, etc.
|
||||
The default credentials can be downloaded using one of the commands listed below.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="curl">
|
||||
```bash
|
||||
curl -o .env https://raw.githubusercontent.com/Infisical/infisical/main/.env.example
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="wget">
|
||||
```bash
|
||||
wget -O .env https://raw.githubusercontent.com/Infisical/infisical/main/.env.example
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
<Note>
|
||||
Make sure to rename the `.env.example` file to `.env` before starting Infisical. Additionally, it's important that the `.env` file is in the same directory as the `docker-compose.prod.yml` file.
|
||||
</Note>
|
||||
|
||||
## Setup Podman
|
||||
Run the commands below to setup Podman for first time use.
|
||||
```bash
|
||||
podman machine init --now
|
||||
podman machine set --rootful
|
||||
podman machine start
|
||||
```
|
||||
|
||||
<Note>
|
||||
If you are using a rootless podman installation, you can skip the `podman machine set --rootful` command.
|
||||
</Note>
|
||||
|
||||
## Start Infisical
|
||||
Run the command below to start Infisical and all related services.
|
||||
|
||||
```bash
|
||||
podman-compose -f docker-compose.prod.yml up
```
|
||||
<Tab title="wget">
|
||||
```bash
|
||||
wget -O docker-compose.prod.yml https://raw.githubusercontent.com/Infisical/infisical/main/docker-compose.prod.yml
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Configure instance credentials (.env file)
|
||||
|
||||
Infisical relies on a `.env` file to provide configuration to all services in the deployment stack (e.g. Infisical backend, PostgreSQL, Redis). This file contains:
|
||||
|
||||
- **Secrets**: Keys used for encryption and authentication.
|
||||
- **Database settings**: Postgres username, password, and connection URI.
|
||||
- **Redis connection**: Redis host and port for caching and job queues.
|
||||
- **Application metadata**: Your instance's URL and service behavior toggles.
|
||||
|
||||
The default credentials can be downloaded using one of the commands listed below.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="curl">
|
||||
```bash
|
||||
curl -o .env https://raw.githubusercontent.com/Infisical/infisical/main/.env.example
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="wget">
|
||||
```bash
|
||||
wget -O .env https://raw.githubusercontent.com/Infisical/infisical/main/.env.example
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
After downloading, open the `.env` file in your preferred editor and review the contents. **You MUST change** the following values before deploying in production:
|
||||
|
||||
- `ENCRYPTION_KEY`: Generate a new one using `openssl rand -hex 16`
|
||||
- `AUTH_SECRET`: Generate a new one using `openssl rand -base64 32`
|
||||
- `POSTGRES_PASSWORD`: Set a strong password
|
||||
- `SITE_URL`: Set to your server's domain name (e.g., `https://secrets.example.com`)
|
||||
|
||||
Other fields (e.g., `POSTGRES_USER`, `POSTGRES_DB`, and `REDIS_URL`) can often remain as default for basic usage.
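Put together, the relevant portion of a production-oriented `.env` might look like the sketch below — every value is a placeholder; generate your own secrets and use your own domain:

```env
ENCRYPTION_KEY=<output of: openssl rand -hex 16>
AUTH_SECRET=<output of: openssl rand -base64 32>
POSTGRES_USER=infisical
POSTGRES_PASSWORD=<strong unique password>
POSTGRES_DB=infisical
SITE_URL=https://secrets.example.com
```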
|
||||
|
||||
<Warning>
|
||||
The `.env` file contains secrets that control the security of your deployment. Do not commit it to version control, and protect access to it with:
|
||||
```bash
|
||||
chmod 600 .env
|
||||
```
|
||||
</Warning>
|
||||
|
||||
## Start Infisical
|
||||
|
||||
Start the deployment with:
|
||||
|
||||
```bash
|
||||
docker-compose -f docker-compose.prod.yml up -d
|
||||
```
|
||||
|
||||
Monitor:
|
||||
|
||||
```bash
|
||||
docker-compose ps
|
||||
docker-compose logs -f
|
||||
```
|
||||
|
||||
Once Infisical is running, visit `http://localhost:80` or your server’s domain to register your admin account.
|
||||
|
||||
</Tab>
|
||||
|
||||
<Tab title="Podman Compose">
|
||||
|
||||
Podman Compose is an alternative way to run Infisical using Podman as a replacement for Docker. Podman is backwards compatible with Docker Compose files.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- [Podman](https://podman-desktop.io/docs/installation)
|
||||
- [Podman Compose](https://podman-desktop.io/docs/compose)
|
||||
|
||||
<Warning>
|
||||
This Docker Compose configuration is not designed for high-availability production scenarios.
|
||||
It includes just the essential components needed to set up an Infisical proof of concept (POC).
|
||||
To run Infisical in a highly available manner, refer to the [Docker Swarm guide](/self-hosting/deployment-options/docker-swarm).
|
||||
</Warning>
|
||||
|
||||
## Verify prerequisites
|
||||
|
||||
```bash
|
||||
podman version
|
||||
podman-compose version
|
||||
```
|
||||
|
||||
## Download Docker Compose file
|
||||
|
||||
<Tabs>
|
||||
<Tab title="curl">
|
||||
```bash
|
||||
curl -o docker-compose.prod.yml https://raw.githubusercontent.com/Infisical/infisical/main/docker-compose.prod.yml
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="wget">
|
||||
```bash
|
||||
wget -O docker-compose.prod.yml https://raw.githubusercontent.com/Infisical/infisical/main/docker-compose.prod.yml
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Configure instance credentials (.env file)
|
||||
|
||||
<Tabs>
|
||||
<Tab title="curl">
|
||||
```bash
|
||||
curl -o .env https://raw.githubusercontent.com/Infisical/infisical/main/.env.example
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="wget">
|
||||
```bash
|
||||
wget -O .env https://raw.githubusercontent.com/Infisical/infisical/main/.env.example
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
<Note>
|
||||
Rename `.env.example` to `.env` and place it in the same directory as the Compose file. Fill in values like `ENCRYPTION_KEY`, `AUTH_SECRET`, and `SITE_URL` before continuing.
|
||||
</Note>
|
||||
|
||||
## Setup Podman
|
||||
|
||||
```bash
|
||||
podman machine init --now
|
||||
podman machine set --rootful
|
||||
podman machine start
|
||||
```
|
||||
|
||||
<Note>
|
||||
If using a rootless Podman install, you can skip `podman machine set --rootful`.
|
||||
</Note>
|
||||
|
||||
## Start Infisical
|
||||
|
||||
```bash
|
||||
podman-compose -f docker-compose.prod.yml up
|
||||
```
|
||||
|
||||
Access the UI via `http://localhost:80`.
|
||||
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
---
|
||||
|
||||
## Optional Configuration and Hardening
|
||||
|
||||
<Accordion title="Enable HTTPS with NGINX (TLS Configuration)">
|
||||
We recommend securing your Infisical deployment with HTTPS using a reverse proxy like NGINX. Here's a step-by-step example.
|
||||
|
||||
1. **Obtain SSL certificates** (e.g. via Let's Encrypt or your own CA). If using Let's Encrypt and `certbot`, you’ll get:
|
||||
- `fullchain.pem`
|
||||
- `privkey.pem`
|
||||
|
||||
2. **Create a folder for NGINX TLS files** in your project root:
|
||||
```bash
|
||||
mkdir -p ./nginx/certs
|
||||
cp fullchain.pem ./nginx/certs/
|
||||
cp privkey.pem ./nginx/certs/
|
||||
```
|
||||
|
||||
3. **Add an NGINX service to `docker-compose.prod.yml`**:
|
||||
```yaml
|
||||
nginx:
|
||||
image: nginx:alpine
|
||||
ports:
|
||||
- "80:80"
|
||||
- "443:443"
|
||||
volumes:
|
||||
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
|
||||
- ./nginx/certs:/etc/nginx/certs:ro
|
||||
depends_on:
|
||||
- infisical
|
||||
```
|
||||
|
||||
4. **Create `./nginx/nginx.conf`** with the following content:
|
||||
```nginx
|
||||
events {}
|
||||
|
||||
http {
|
||||
server {
|
||||
listen 80;
|
||||
server_name secrets.example.com;
|
||||
return 301 https://$host$request_uri;
|
||||
}
|
||||
|
||||
server {
|
||||
listen 443 ssl;
|
||||
server_name secrets.example.com;
|
||||
|
||||
ssl_certificate /etc/nginx/certs/fullchain.pem;
|
||||
ssl_certificate_key /etc/nginx/certs/privkey.pem;
|
||||
|
||||
location / {
|
||||
proxy_pass http://infisical:8080;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
5. **Update `.env`** to use your HTTPS domain:
|
||||
```env
|
||||
SITE_URL=https://secrets.example.com
|
||||
```
|
||||
|
||||
6. **Restart services**:
|
||||
```bash
|
||||
docker-compose down
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
You should now be able to access Infisical securely via HTTPS.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Externalize Database and Redis">
|
||||
For higher availability, use external services like AWS RDS and ElastiCache.
|
||||
|
||||
Update your `.env`:
|
||||
|
||||
```env
|
||||
DB_CONNECTION_URI=postgresql://<user>:<password>@<host>:5432/<dbname>
|
||||
REDIS_URL=redis://:<password>@<host>:6379
|
||||
```
|
||||
|
||||
Remove or comment out `db` and `redis` services in `docker-compose.prod.yml`.
|
||||
|
||||
Ensure firewall/VPC access to these services is properly configured.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Backup Strategy">
|
||||
Use `pg_dump` to back up your database:
|
||||
|
||||
```bash
|
||||
docker-compose exec postgres pg_dump -U infisical infisical > backup.sql
|
||||
```
|
||||
|
||||
Store backups securely and automate this step via cron or CI workflows.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Enable Prometheus Monitoring">
|
||||
Infisical exposes Prometheus metrics when enabled:
|
||||
|
||||
1. Add to `.env`:
|
||||
```env
|
||||
OTEL_TELEMETRY_COLLECTION_ENABLED=true
|
||||
OTEL_EXPORT_TYPE=prometheus
|
||||
```
|
||||
|
||||
2. Expose port `9464` in `docker-compose.prod.yml`.
|
||||
|
||||
3. Configure your Prometheus to scrape `localhost:9464` or container DNS.
|
||||
|
||||
See [Monitoring Guide](/self-hosting/guides/monitoring-telemetry) for full setup.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Upgrade Instructions">
|
||||
|
||||
Keeping Infisical up-to-date ensures you receive the latest feature updates, performance improvements and security patches. Although we avoid making disruptive changes, we recommend having a current and accessible backup of your Postgres database. For additional information on performing Infisical upgrades, see our [Upgrade Guide](/self-hosting/guides/upgrading-infisical).
|
||||
|
||||
To upgrade Infisical:
|
||||
|
||||
1. Change image tag in `docker-compose.prod.yml`.
|
||||
2. Run:
|
||||
|
||||
```bash
|
||||
docker-compose pull
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
3. Confirm migration logs complete successfully.
|
||||
</Accordion>
|
||||
|
||||
---
|
||||
|
||||
Your Infisical instance should now be running on port `80` (or `443` if using TLS). To access your instance, visit `http://localhost:80` or `https://<your-domain>`.
|
||||
|
||||

|
||||
|
||||
761 docs/self-hosting/deployment-options/gcp-native.mdx Normal file
@@ -0,0 +1,761 @@
|
||||
---
|
||||
title: "GCP (GKE with Cloud SQL & Memorystore)"
|
||||
description: "Deploy Infisical securely on Google Cloud Platform using GKE, Cloud SQL, and Memorystore."
|
||||
---
|
||||
|
||||
Learn how to deploy **Infisical** on Google Cloud Platform using **Google Kubernetes Engine (GKE)** for container orchestration. This guide covers setting up Infisical in a production-ready GCP environment using **Cloud SQL** (PostgreSQL) for the database, **Memorystore** (Redis) for caching, and **Google Cloud Load Balancing** for routing traffic. We will also configure secure secret storage, IAM roles, logging, monitoring, and high availability to ensure the deployment is robust, secure, and scalable.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- A Google Cloud Platform account with permissions to create VPCs, GKE clusters, Cloud SQL instances, Memorystore instances, and Load Balancers.
|
||||
- Basic knowledge of GCP networking (VPC, subnets, firewall rules) and Kubernetes concepts.
|
||||
- `gcloud` CLI installed and configured (for command-line operations).
|
||||
- `kubectl` installed for interacting with your GKE cluster.
|
||||
- `helm` installed (version 3.x) for deploying the Infisical Helm chart.
|
||||
- An Infisical Docker image tag (find a specific version on Docker Hub to use for your deployment — avoid using `latest` in production).
|
||||
|
||||
<Steps>
|
||||
<Step title="Set up network infrastructure (VPC, subnets, firewall rules)">
|
||||
To host Infisical on GCP, first prepare your Virtual Private Cloud (VPC) network infrastructure:
|
||||
|
||||
**VPC & Subnets:**
|
||||
- Create a **VPC-native network** (or use an existing one) that will host your GKE cluster, Cloud SQL instance, and Memorystore instance. VPC-native networking is required for private IP connectivity between GKE and managed services like Cloud SQL and Memorystore.
|
||||
- Create a **subnet** for your GKE cluster with an appropriate IP range. For production deployments, ensure the subnet has sufficient IP addresses for node scaling and pod IP allocation (GKE uses secondary IP ranges for pods and services).
|
||||
- When creating the subnet, define **secondary IP ranges** for Kubernetes pods and services. For example:
|
||||
- Primary range: `10.0.0.0/20` (for nodes)
|
||||
- Secondary range for pods: `10.4.0.0/14`
|
||||
- Secondary range for services: `10.8.0.0/20`
|
||||
|
||||
**Cloud Router & Cloud NAT:**
|
||||
- Deploy a **Cloud Router** in your region to enable dynamic routing.
|
||||
- Create a **Cloud NAT gateway** associated with the Cloud Router to allow outbound internet access from private GKE nodes (for pulling container images from Docker Hub, sending emails, etc.). This is essential if your GKE nodes don't have external IP addresses (which is the recommended security practice).
|
||||
- Configure the NAT gateway to use the subnet(s) where your GKE cluster resides.
|
||||
|
||||
**Firewall Rules:**
|
||||
- GCP's default VPC firewall rules typically allow internal traffic within the VPC. Verify that internal communication is allowed between resources in your VPC.
|
||||
- Create specific firewall rules if needed:
|
||||
- Allow traffic from GKE pods to Cloud SQL on port **5432** (PostgreSQL).
|
||||
- Allow traffic from GKE pods to Memorystore on port **6379** (Redis).
|
||||
- If using Google Cloud Load Balancer, ensure health check ranges can reach your GKE nodes (GCP health checkers use specific IP ranges like `130.211.0.0/22` and `35.191.0.0/16`).
|
||||
- Restrict external SSH access to GKE nodes. Ideally, GKE nodes should only be accessible via the Kubernetes API and internal GCP services.
|
||||
|
||||
**Private Google Access:**
|
||||
- Enable **Private Google Access** on your subnet to allow resources without external IP addresses (like GKE nodes) to access Google APIs and services (such as Container Registry, Secret Manager, etc.).
|
||||
|
||||
With this network foundation, your GKE cluster, Cloud SQL, and Memorystore instances will be able to communicate securely within the VPC, while outbound internet access is controlled through Cloud NAT.
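The same network foundation can be created with `gcloud` — a condensed sketch using the example ranges above (network, subnet, and router names are illustrative):

```bash
# Custom-mode VPC and a subnet with secondary ranges for pods and services.
gcloud compute networks create infisical-vpc --subnet-mode=custom

gcloud compute networks subnets create infisical-subnet \
  --network=infisical-vpc \
  --region=us-central1 \
  --range=10.0.0.0/20 \
  --secondary-range=pods=10.4.0.0/14,services=10.8.0.0/20 \
  --enable-private-ip-google-access

# Cloud Router + Cloud NAT for outbound access from private nodes.
gcloud compute routers create infisical-router \
  --network=infisical-vpc --region=us-central1

gcloud compute routers nats create infisical-nat \
  --router=infisical-router --region=us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```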
|
||||
</Step>
|
||||
|
||||
<Step title="Provision Google Kubernetes Engine (GKE) cluster">
|
||||
Create a GKE cluster to host your Infisical deployment:
|
||||
|
||||
**Cluster Type & Configuration:**
|
||||
- Create a **regional GKE cluster** for high availability. Regional clusters create control plane replicas and node pools across multiple zones within a region, providing resilience against zone failures.
|
||||
- Alternatively, you can create a **zonal cluster** for cost savings, but regional clusters are strongly recommended for production.
|
||||
- Choose an appropriate **machine type** for your nodes. For Infisical, `n2-standard-4` (4 vCPUs, 16 GB memory) is a good starting point, but adjust based on your expected load and number of replicas.
|
||||
- Set **node count per zone**. For a regional cluster with 3 zones, starting with 1 node per zone (3 total) provides a baseline. For production, consider 2+ nodes per zone for redundancy.
|
||||
|
||||
**Networking Configuration:**
|
||||
- Use the **VPC-native** cluster mode (this is the default for new clusters).
|
||||
- Select the VPC and subnet you created in the previous step.
|
||||
- Specify the secondary IP ranges for pods and services that you defined earlier.
|
||||
- Disable **external IP addresses** on nodes for improved security (requires Cloud NAT for outbound access).
|
||||
|
||||
**Security & Access:**
|
||||
- Enable **Workload Identity** on the cluster. Workload Identity is the recommended way to allow GKE pods to authenticate to Google Cloud services using IAM. This is more secure than using service account keys.
|
||||
- Enable **VPC-native security** features like Network Policies if you need fine-grained pod-to-pod traffic control.
|
||||
- Consider enabling **Shielded GKE Nodes** for additional security (secure boot, integrity monitoring).
|
||||
- Enable **Binary Authorization** if you want to enforce that only verified container images can be deployed.
|
||||
|
||||
**Example cluster creation command:**
|
||||
|
||||
```bash
|
||||
gcloud container clusters create infisical-cluster \
|
||||
--region us-central1 \
|
||||
--machine-type n2-standard-4 \
|
||||
--num-nodes 1 \
|
||||
--enable-ip-alias \
|
||||
--network YOUR_VPC_NAME \
|
||||
--subnetwork YOUR_SUBNET_NAME \
|
||||
--cluster-secondary-range-name PODS_RANGE_NAME \
|
||||
--services-secondary-range-name SERVICES_RANGE_NAME \
|
||||
--enable-private-nodes \
|
||||
--enable-private-endpoint \
|
||||
--master-ipv4-cidr 172.16.0.0/28 \
|
||||
--no-enable-basic-auth \
|
||||
--no-issue-client-certificate \
|
||||
--enable-stackdriver-kubernetes \
|
||||
--enable-autoscaling \
|
||||
--min-nodes 1 \
|
||||
--max-nodes 5 \
|
||||
--enable-autorepair \
|
||||
--enable-autoupgrade \
|
||||
--workload-pool=PROJECT_ID.svc.id.goog
|
||||
```
|
||||
|
||||
**Connect to the cluster:**
|
||||
|
||||
After creation, configure `kubectl` to connect to your cluster:
|
||||
|
||||
```bash
|
||||
gcloud container clusters get-credentials infisical-cluster --region us-central1
|
||||
```
|
||||
|
||||
**Verify cluster access:**
|
||||
|
||||
```bash
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
You should see your nodes listed and in a `Ready` state.
|
||||
|
||||
<Note>
|
||||
For private GKE clusters (where the control plane is not publicly accessible), you'll need to access the cluster from within the VPC (via a bastion host or Cloud Shell) or configure authorized networks to allow your IP to access the control plane endpoint.
|
||||
</Note>
|
||||
</Step>
|
||||
|
||||
<Step title="Provision Cloud SQL for PostgreSQL">
|
||||
Set up the PostgreSQL database for Infisical using Google Cloud SQL:
|
||||
|
||||
**Create Cloud SQL Instance:**
|
||||
- In the GCP Console, navigate to **SQL** and create a new instance.
|
||||
- Select **PostgreSQL** as the database engine. Choose a recent stable version (e.g., PostgreSQL 15 or 16).
|
||||
- Choose an **instance type**. For production, use a machine type with sufficient resources (e.g., `db-n1-standard-2` with 2 vCPUs and 7.5 GB memory as a starting point). Adjust based on your expected database load.
|
||||
- Select the **region** where your GKE cluster is located to minimize latency.
|
||||
|
||||
**High Availability Configuration:**
|
||||
- Enable **High Availability (HA)** to create a standby replica in a different zone within the same region. This provides automatic failover in case the primary instance fails.
|
||||
- HA in Cloud SQL creates a synchronous standby instance, ensuring zero data loss during failover.
|
||||
|
||||
**Storage Configuration:**
|
||||
- Choose **SSD** storage for better performance.
|
||||
- Set an appropriate storage size (e.g., 20 GB to start, with automatic storage increases enabled).
|
||||
- Enable **automatic storage increase** to prevent running out of space.
|
||||
|
||||
**Networking Configuration:**
|
||||
- Under **Connections**, select **Private IP** and choose your VPC network.
|
||||
- Cloud SQL will automatically create a private service connection to your VPC using VPC peering. This allows your GKE pods to connect to the database using a private IP address without traversing the public internet.
|
||||
- **Disable public IP** for enhanced security (your GKE cluster will access the database via private IP).
|
||||
- Note: Setting up private IP requires enabling the Service Networking API and allocating an IP range for private service connection. GCP will guide you through this in the console.
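
For reference, the private services access setup can also be done with `gcloud`. A sketch under the assumption that `YOUR_VPC_NAME` and the range name `google-managed-services-range` are placeholders you replace with your own values:

```bash
# Enable the Service Networking API
gcloud services enable servicenetworking.googleapis.com

# Allocate an IP range in your VPC for Google-managed services (used by Cloud SQL)
gcloud compute addresses create google-managed-services-range \
  --global \
  --purpose=VPC_PEERING \
  --prefix-length=16 \
  --network=YOUR_VPC_NAME

# Create the private service connection (VPC peering) used by Cloud SQL
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=google-managed-services-range \
  --network=YOUR_VPC_NAME
```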
|
||||
|
||||
**Database Configuration:**
|
||||
- Create a **database** for Infisical (e.g., `infisical`).
|
||||
- Create a **user** with a strong password (e.g., `infisical_user`). Save these credentials securely—you'll need them later.
|
||||
|
||||
**Backup & Recovery:**
|
||||
- Enable **automated backups** with point-in-time recovery (PITR). Set a backup window that doesn't overlap with high-traffic periods.
|
||||
- Set a **backup retention period** (e.g., 7 days or more for production).
|
||||
- Note that for PostgreSQL, point-in-time recovery relies on write-ahead log (WAL) archiving, which Cloud SQL manages automatically when PITR is enabled (binary logging applies only to MySQL instances).
|
||||
|
||||
**Security:**
|
||||
- Enable **encryption at rest** (enabled by default).
|
||||
- Consider using **Customer-Managed Encryption Keys (CMEK)** via Cloud KMS for additional control.
|
||||
|
||||
**Connection Details:**
|
||||
|
||||
After the instance is created, note the following:
|
||||
- **Private IP address** (e.g., `10.x.x.x`)
|
||||
- **Connection name** (format: `project:region:instance`)
|
||||
- **Database name** (e.g., `infisical`)
|
||||
- **Username** and **Password**
|
||||
|
||||
Your **DB_CONNECTION_URI** will be in the format:
|
||||
|
||||
```
|
||||
postgresql://infisical_user:password@private-ip:5432/infisical
|
||||
```
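
To verify connectivity from inside the cluster before deploying Infisical, you can run a throwaway Postgres client pod. The image, credentials, and IP below are placeholders:

```bash
kubectl run psql-test --rm -it --image=postgres:16 --restart=Never -- \
  psql "postgresql://infisical_user:password@PRIVATE_IP:5432/infisical" -c "SELECT version();"
```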
|
||||
|
||||
<Tip>
|
||||
For enhanced security, you can use **Cloud SQL IAM authentication** instead of password-based authentication. This allows your GKE pods to authenticate to Cloud SQL using their Workload Identity, eliminating the need to store database passwords.
|
||||
</Tip>
|
||||
</Step>
|
||||
|
||||
<Step title="Provision Memorystore for Redis">
|
||||
Set up Redis caching for Infisical using Google Memorystore:
|
||||
|
||||
**Create Memorystore Instance:**
|
||||
- In the GCP Console, navigate to **Memorystore** (under Databases) and create a new Redis instance.
|
||||
- Select the **Redis tier**: Use **Standard Tier** for production to get high availability with automatic failover. Standard tier creates a primary and replica in different zones.
|
||||
- Choose the **Redis version** (use a recent stable version, e.g., Redis 7.x).
|
||||
|
||||
**Capacity & Performance:**
|
||||
- Set the **memory capacity**. For Infisical, start with **1 GB** and scale up as needed based on your caching requirements.
|
||||
- Note the **read replicas** option: Standard Tier includes a replica used for automatic failover (it does not serve reads by default). If you need read replicas for read-heavy workloads, you can enable them separately (though Infisical primarily uses Redis for caching and ephemeral data).
|
||||
|
||||
**Networking Configuration:**
|
||||
- Select your **VPC network** (the same VPC where your GKE cluster resides).
|
||||
- Choose the **region** matching your GKE cluster and Cloud SQL instance.
|
||||
- Memorystore will assign a **private IP address** from your VPC's IP range. This IP is directly accessible from your GKE pods.
|
||||
|
||||
**Security & Authentication:**
|
||||
- **Important**: Memorystore for Redis instances have **AUTH disabled by default** (an auto-generated AUTH string can be enabled on supported versions). Without AUTH, security relies entirely on VPC isolation and firewall rules.
- Ensure that only your GKE cluster's pods can access the Memorystore IP by using appropriate firewall rules or VPC security controls.
- Consider enabling **in-transit encryption (TLS)** where your Redis version and tier support it; verify current Memorystore capabilities for your configuration.
|
||||
|
||||
**Backup & Maintenance:**
|
||||
- Configure a **maintenance window** for automated updates.
|
||||
- Standard Tier provides automatic failover rather than traditional backups; for data durability you can export snapshots to Cloud Storage or enable RDB persistence where supported. Since Infisical uses Redis for caching and ephemeral data, Redis backups are generally not required.
|
||||
|
||||
**Connection Details:**
|
||||
|
||||
After creation, note the following:
|
||||
- **Primary endpoint IP** (e.g., `10.x.x.x`)
|
||||
- **Port** (default is `6379`)
|
||||
|
||||
Your **REDIS_URL** will be in the format:
|
||||
|
||||
```
|
||||
redis://memorystore-ip:6379
|
||||
```
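
You can verify Redis connectivity from inside the cluster in the same way (the IP is a placeholder); a successful check returns `PONG`:

```bash
kubectl run redis-test --rm -it --image=redis:7 --restart=Never -- \
  redis-cli -h MEMORYSTORE_IP -p 6379 PING
```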
|
||||
|
||||
Note: If AUTH is disabled on your Memorystore instance (the default), there is no password in the connection string, and you can leave any password field in Infisical's configuration empty. If you enable AUTH, include the auth string in the URL, e.g. `redis://:AUTH_STRING@memorystore-ip:6379`.
|
||||
|
||||
<Warning>
|
||||
If you run Memorystore without AUTH enabled, **ensure your firewall rules strictly limit access** to the Redis instance. Only allow connections from your GKE cluster's pod IP ranges. For additional security, enable AUTH and in-transit encryption where supported, or consider deploying your own Redis cluster in GKE with password authentication and Sentinel for high availability, though this adds operational complexity.
|
||||
</Warning>
|
||||
</Step>
|
||||
|
||||
<Step title="Securely store Infisical secrets and configuration">
|
||||
Infisical requires certain secrets and configuration values to run. Store these securely using Google Secret Manager or Kubernetes secrets:
|
||||
|
||||
**Generate Required Secrets:**
|
||||
|
||||
1. **Encryption Key (ENCRYPTION_KEY):** Generate a random 16-byte hex string (32 hex characters):
|
||||
|
||||
```bash
|
||||
openssl rand -hex 16
|
||||
```
|
||||
|
||||
This key encrypts secrets in the database. **Keep this key secure and backed up** — losing it means losing access to your encrypted data.
|
||||
|
||||
2. **Authentication Secret (AUTH_SECRET):** Generate a random 32-byte base64 string:
|
||||
|
||||
```bash
|
||||
openssl rand -base64 32
|
||||
```
|
||||
|
||||
This secret is used for JWT signing and session management.
|
||||
|
||||
3. **Database Connection URI (DB_CONNECTION_URI):** Use the Cloud SQL connection details from Step 3:
|
||||
|
||||
```
|
||||
postgresql://infisical_user:password@cloud-sql-private-ip:5432/infisical
|
||||
```
|
||||
|
||||
4. **Redis URL (REDIS_URL):** Use the Memorystore connection details from Step 4:
|
||||
|
||||
```
|
||||
redis://memorystore-ip:6379
|
||||
```
|
||||
|
||||
5. **Site URL (SITE_URL):** The URL where users will access Infisical. Initially, this can be a placeholder (e.g., `http://infisical.example.com`). You'll update this to the actual HTTPS URL after configuring the load balancer and SSL.
|
||||
|
||||
**Storage Options:**
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Google Secret Manager (Recommended)">
|
||||
Use Google Secret Manager to store sensitive values with fine-grained IAM access control:
|
||||
|
||||
```bash
|
||||
# Enable the Secret Manager API
|
||||
gcloud services enable secretmanager.googleapis.com
|
||||
|
||||
# Store each secret
|
||||
echo -n "YOUR_ENCRYPTION_KEY" | gcloud secrets create infisical-encryption-key --data-file=-
|
||||
echo -n "YOUR_AUTH_SECRET" | gcloud secrets create infisical-auth-secret --data-file=-
|
||||
echo -n "YOUR_DB_URI" | gcloud secrets create infisical-db-uri --data-file=-
|
||||
echo -n "YOUR_REDIS_URL" | gcloud secrets create infisical-redis-url --data-file=-
|
||||
```
|
||||
|
||||
With Workload Identity enabled, your GKE pods can access these secrets by mounting them as environment variables or files using the **Secrets Store CSI Driver** or by using init containers that fetch secrets at startup.
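
As an illustration, a `SecretProviderClass` for the Secrets Store CSI Driver with the GCP provider might look like the following. The resource names and `PROJECT_ID` are placeholders, and this assumes the CSI driver and the GCP provider are already installed in the cluster:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: infisical-gsm
  namespace: infisical
spec:
  provider: gcp
  parameters:
    secrets: |
      - resourceName: "projects/PROJECT_ID/secrets/infisical-encryption-key/versions/latest"
        path: "ENCRYPTION_KEY"
      - resourceName: "projects/PROJECT_ID/secrets/infisical-auth-secret/versions/latest"
        path: "AUTH_SECRET"
```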
|
||||
</Tab>
|
||||
|
||||
<Tab title="Kubernetes Secrets">
|
||||
Store secrets directly in Kubernetes (base64 encoded). This is simpler but less secure than Secret Manager:
|
||||
|
||||
```bash
|
||||
# Create a namespace for Infisical
|
||||
kubectl create namespace infisical
|
||||
|
||||
# Create the secret
|
||||
kubectl create secret generic infisical-secrets \
|
||||
--from-literal=ENCRYPTION_KEY="YOUR_ENCRYPTION_KEY" \
|
||||
--from-literal=AUTH_SECRET="YOUR_AUTH_SECRET" \
|
||||
--from-literal=DB_CONNECTION_URI="postgresql://infisical_user:password@cloud-sql-ip:5432/infisical" \
|
||||
--from-literal=REDIS_URL="redis://memorystore-ip:6379" \
|
||||
--from-literal=SITE_URL="http://infisical.example.com" \
|
||||
-n infisical
|
||||
```
|
||||
|
||||
Kubernetes secrets are base64 encoded (not encrypted) by default. For enhanced security, enable **encryption at rest** for Kubernetes secrets in GKE by using Application-layer Secrets Encryption (ALSE) with Cloud KMS.
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
**Configure IAM for Secret Access (if using Secret Manager):**
|
||||
|
||||
If using Google Secret Manager with Workload Identity:
|
||||
|
||||
1. Create a Google Cloud IAM service account:
|
||||
|
||||
```bash
|
||||
gcloud iam service-accounts create infisical-gsa \
|
||||
--display-name="Infisical GKE Service Account"
|
||||
```
|
||||
|
||||
2. Grant the service account access to secrets:
|
||||
|
||||
```bash
|
||||
gcloud secrets add-iam-policy-binding infisical-encryption-key \
|
||||
--member="serviceAccount:infisical-gsa@PROJECT_ID.iam.gserviceaccount.com" \
|
||||
--role="roles/secretmanager.secretAccessor"
|
||||
|
||||
# Repeat for other secrets
|
||||
```
|
||||
|
||||
3. Bind the Google service account to a Kubernetes service account (we'll create this in the Helm deployment step):
|
||||
|
||||
```bash
|
||||
gcloud iam service-accounts add-iam-policy-binding \
|
||||
infisical-gsa@PROJECT_ID.iam.gserviceaccount.com \
|
||||
--role roles/iam.workloadIdentityUser \
|
||||
--member "serviceAccount:PROJECT_ID.svc.id.goog[infisical/infisical]"
|
||||
```
|
||||
|
||||
<Warning>
|
||||
**Never commit secrets to version control** or expose them in logs. Always use secure storage mechanisms like Secret Manager or encrypted Kubernetes secrets. The **ENCRYPTION_KEY** is especially critical—back it up in a secure, offline location.
|
||||
</Warning>
|
||||
</Step>
|
||||
|
||||
<Step title="Deploy Infisical using Helm">
|
||||
Deploy Infisical to your GKE cluster using the official Helm chart:
|
||||
|
||||
**Add the Infisical Helm Repository:**
|
||||
|
||||
```bash
|
||||
helm repo add infisical https://dl.cloudsmith.io/public/infisical/helm-charts/helm/charts/
|
||||
helm repo update
|
||||
```
|
||||
|
||||
**Create a Helm Values File:**
|
||||
|
||||
Create a file named `infisical-values.yaml` with your GCP-specific configuration:
|
||||
|
||||
```yaml
|
||||
# infisical-values.yaml
|
||||
|
||||
# Number of Infisical replicas (2+ for HA)
|
||||
replicaCount: 2
|
||||
|
||||
# Infisical image configuration
|
||||
image:
|
||||
repository: infisical/infisical
|
||||
tag: "v0.X.X" # Use a specific version, not "latest"
|
||||
pullPolicy: IfNotPresent
|
||||
|
||||
# Service configuration
|
||||
service:
|
||||
type: ClusterIP # We'll use an Ingress/Load Balancer
|
||||
port: 8080
|
||||
|
||||
# Ingress configuration (for HTTPS access via Google Cloud Load Balancer)
|
||||
ingress:
|
||||
enabled: true
|
||||
className: "gce" # Use GCE ingress controller
|
||||
annotations:
|
||||
kubernetes.io/ingress.class: "gce"
|
||||
kubernetes.io/ingress.global-static-ip-name: "infisical-ip" # Reserve a static IP first
|
||||
networking.gke.io/managed-certificates: "infisical-cert" # For Google-managed SSL
|
||||
hosts:
|
||||
- host: infisical.example.com
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
tls:
|
||||
- hosts:
|
||||
- infisical.example.com
|
||||
secretName: infisical-tls
|
||||
|
||||
# Environment variables (non-sensitive)
|
||||
env:
|
||||
- name: SITE_URL
|
||||
value: "https://infisical.example.com"
|
||||
- name: HOST
|
||||
value: "0.0.0.0"
|
||||
- name: PORT
|
||||
value: "8080"
|
||||
|
||||
# Secrets (sensitive values from Kubernetes secret or Secret Manager)
|
||||
envFrom:
|
||||
- secretRef:
|
||||
name: infisical-secrets
|
||||
|
||||
# Resource limits (adjust based on your load)
|
||||
resources:
|
||||
requests:
|
||||
memory: "512Mi"
|
||||
cpu: "500m"
|
||||
limits:
|
||||
memory: "1Gi"
|
||||
cpu: "1000m"
|
||||
|
||||
# Health checks
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /api/status
|
||||
port: 8080
|
||||
initialDelaySeconds: 30
|
||||
periodSeconds: 10
|
||||
timeoutSeconds: 5
|
||||
failureThreshold: 3
|
||||
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /api/status
|
||||
port: 8080
|
||||
initialDelaySeconds: 10
|
||||
periodSeconds: 5
|
||||
timeoutSeconds: 3
|
||||
failureThreshold: 3
|
||||
|
||||
# Pod disruption budget (for HA during updates)
|
||||
podDisruptionBudget:
|
||||
enabled: true
|
||||
minAvailable: 1
|
||||
|
||||
# Horizontal Pod Autoscaler (optional, for auto-scaling)
|
||||
autoscaling:
|
||||
enabled: true
|
||||
minReplicas: 2
|
||||
maxReplicas: 10
|
||||
targetCPUUtilizationPercentage: 70
|
||||
targetMemoryUtilizationPercentage: 80
|
||||
|
||||
# Service account (if using Workload Identity with Secret Manager)
|
||||
serviceAccount:
|
||||
create: true
|
||||
name: infisical
|
||||
annotations:
|
||||
iam.gke.io/gcp-service-account: infisical-gsa@PROJECT_ID.iam.gserviceaccount.com
|
||||
|
||||
# Pod security context
|
||||
podSecurityContext:
|
||||
fsGroup: 1000
|
||||
runAsNonRoot: true
|
||||
runAsUser: 1000
|
||||
|
||||
# Node affinity and tolerations (optional, for node pool targeting)
|
||||
affinity:
|
||||
podAntiAffinity:
|
||||
preferredDuringSchedulingIgnoredDuringExecution:
|
||||
- weight: 100
|
||||
podAffinityTerm:
|
||||
labelSelector:
|
||||
matchExpressions:
|
||||
- key: app.kubernetes.io/name
|
||||
operator: In
|
||||
values:
|
||||
- infisical
|
||||
topologyKey: topology.kubernetes.io/zone
|
||||
```
|
||||
|
||||
**Reserve a Static IP Address:**
|
||||
|
||||
Before deploying, reserve a global static IP for your load balancer:
|
||||
|
||||
```bash
|
||||
gcloud compute addresses create infisical-ip --global
|
||||
```
|
||||
|
||||
Verify the IP:
|
||||
|
||||
```bash
|
||||
gcloud compute addresses describe infisical-ip --global
|
||||
```
|
||||
|
||||
**Deploy Infisical:**
|
||||
|
||||
```bash
|
||||
helm install infisical infisical/infisical \
|
||||
--namespace infisical \
|
||||
--create-namespace \
|
||||
--values infisical-values.yaml
|
||||
```
|
||||
|
||||
**Verify Deployment:**
|
||||
|
||||
```bash
|
||||
# Check pods
|
||||
kubectl get pods -n infisical
|
||||
|
||||
# Check service
|
||||
kubectl get svc -n infisical
|
||||
|
||||
# Check ingress
|
||||
kubectl get ingress -n infisical
|
||||
```
|
||||
|
||||
Wait for all pods to be in `Running` state and for the ingress to be assigned an IP address (this may take a few minutes as GCP provisions the load balancer).
|
||||
|
||||
<Note>
|
||||
If you're using **Google-managed SSL certificates**, it can take 15-60 minutes for the certificate to be provisioned and become active. You can check the status with: `kubectl get managedcertificate -n infisical`
|
||||
</Note>
|
||||
</Step>
|
||||
|
||||
<Step title="Configure HTTPS access with SSL/TLS">
|
||||
Secure your Infisical deployment with HTTPS using Google-managed certificates or cert-manager:
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Google-Managed Certificates (Recommended for GKE)">
|
||||
**Create a ManagedCertificate Resource:**
|
||||
|
||||
Create a file named `managed-cert.yaml`:
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.gke.io/v1
|
||||
kind: ManagedCertificate
|
||||
metadata:
|
||||
name: infisical-cert
|
||||
namespace: infisical
|
||||
spec:
|
||||
domains:
|
||||
- infisical.example.com
|
||||
```
|
||||
|
||||
Apply it:
|
||||
|
||||
```bash
|
||||
kubectl apply -f managed-cert.yaml
|
||||
```
|
||||
|
||||
**Update DNS:**
|
||||
|
||||
Get the external IP of your ingress:
|
||||
|
||||
```bash
|
||||
kubectl get ingress -n infisical
|
||||
```
|
||||
|
||||
Create an **A record** in your DNS provider pointing `infisical.example.com` to the ingress IP.
|
||||
|
||||
**Wait for Certificate Provisioning:**
|
||||
|
||||
Google will automatically provision an SSL certificate for your domain. This can take 15-60 minutes. Check status:
|
||||
|
||||
```bash
|
||||
kubectl describe managedcertificate infisical-cert -n infisical
|
||||
```
|
||||
|
||||
Once the status shows `Active`, your site will be accessible over HTTPS.
|
||||
|
||||
**Update SITE_URL:**
|
||||
|
||||
After HTTPS is working, update the `SITE_URL` environment variable to use `https://`:
|
||||
|
||||
```bash
|
||||
kubectl set env deployment/infisical SITE_URL=https://infisical.example.com -n infisical
|
||||
```
|
||||
</Tab>
|
||||
|
||||
<Tab title="cert-manager with Let's Encrypt">
|
||||
**Install cert-manager:**
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml
|
||||
```
|
||||
|
||||
**Create a ClusterIssuer:**
|
||||
|
||||
Create `letsencrypt-prod.yaml`:
|
||||
|
||||
```yaml
|
||||
apiVersion: cert-manager.io/v1
|
||||
kind: ClusterIssuer
|
||||
metadata:
|
||||
name: letsencrypt-prod
|
||||
spec:
|
||||
acme:
|
||||
server: https://acme-v02.api.letsencrypt.org/directory
|
||||
email: admin@example.com
|
||||
privateKeySecretRef:
|
||||
name: letsencrypt-prod
|
||||
solvers:
|
||||
- http01:
|
||||
ingress:
|
||||
class: gce
|
||||
```
|
||||
|
||||
Apply it:
|
||||
|
||||
```bash
|
||||
kubectl apply -f letsencrypt-prod.yaml
|
||||
```
|
||||
|
||||
**Update Ingress Annotations:**
|
||||
|
||||
In your `infisical-values.yaml`, ensure these annotations are set:
|
||||
|
||||
```yaml
|
||||
ingress:
|
||||
annotations:
|
||||
cert-manager.io/cluster-issuer: "letsencrypt-prod"
|
||||
```
|
||||
|
||||
Redeploy with updated values:
|
||||
|
||||
```bash
|
||||
helm upgrade infisical infisical/infisical \
|
||||
--namespace infisical \
|
||||
--values infisical-values.yaml
|
||||
```
|
||||
|
||||
cert-manager will automatically request and renew certificates from Let's Encrypt.
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
**Verify HTTPS Access:**
|
||||
|
||||
Once the certificate is active, visit `https://infisical.example.com` in your browser. You should see the Infisical login page with a valid SSL certificate (no browser warnings).
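
A quick way to confirm the endpoint and certificate from your terminal (assuming DNS has already propagated):

```bash
curl -sSI https://infisical.example.com/api/status
```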
|
||||
|
||||
**Force HTTPS Redirect:**
|
||||
|
||||
To redirect all HTTP traffic to HTTPS, add this annotation to your ingress:
|
||||
|
||||
```yaml
|
||||
ingress:
|
||||
annotations:
|
||||
networking.gke.io/v1beta1.FrontendConfig: "ssl-redirect"
|
||||
```
|
||||
|
||||
And create a FrontendConfig:
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.gke.io/v1beta1
|
||||
kind: FrontendConfig
|
||||
metadata:
|
||||
name: ssl-redirect
|
||||
namespace: infisical
|
||||
spec:
|
||||
redirectToHttps:
|
||||
enabled: true
|
||||
```
|
||||
</Step>
|
||||
|
||||
<Step title="Configure Cloud Logging and Monitoring">
|
||||
Set up comprehensive monitoring and logging for your Infisical deployment:
|
||||
|
||||
**Google Cloud Logging:**
|
||||
|
||||
GKE automatically integrates with Cloud Logging. All container logs (stdout/stderr) are sent to Cloud Logging.
|
||||
|
||||
View Infisical logs:
|
||||
- Navigate to **Logging > Logs Explorer** in the GCP Console.
|
||||
- Filter by resource type: `k8s_container`, namespace: `infisical`, container name: `infisical`.
|
||||
|
||||
Create log-based metrics for important events (e.g., authentication failures, errors). Set up **log sinks** to export logs to Cloud Storage, BigQuery, or Pub/Sub for long-term retention or analysis.
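
For example, a log-based metric that counts error-level log entries from Infisical pods could be created as follows; the metric name and filter are illustrative:

```bash
gcloud logging metrics create infisical_error_count \
  --description="Error-level log entries from Infisical pods" \
  --log-filter='resource.type="k8s_container" AND resource.labels.namespace_name="infisical" AND severity>=ERROR'
```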
|
||||
|
||||
**Google Cloud Monitoring:**
|
||||
|
||||
Enable **GKE monitoring** (it should be enabled by default for new clusters):
- Access Kubernetes metrics by navigating to **Monitoring > Dashboards > GKE** in the GCP Console.
- View cluster-level metrics (CPU, memory, disk, network) and pod-level metrics for the Infisical pods.
- Create custom dashboards for Infisical-specific metrics.
|
||||
|
||||
**Set Up Alerts:**
|
||||
|
||||
Create alerting policies for critical conditions:
|
||||
|
||||
```bash
|
||||
# Example: Alert on high pod CPU usage
|
||||
gcloud alpha monitoring policies create \
|
||||
--notification-channels=CHANNEL_ID \
|
||||
--display-name="Infisical High CPU" \
|
||||
--condition-display-name="Pod CPU > 80%" \
|
||||
--condition-threshold-value=0.8 \
|
||||
--condition-threshold-duration=300s \
|
||||
--condition-filter='resource.type="k8s_pod" AND resource.labels.namespace_name="infisical"'
|
||||
```
|
||||
|
||||
Common alerts to configure:
|
||||
- High CPU/memory usage on Infisical pods
|
||||
- Pod restart count
|
||||
- Cloud SQL connection errors
|
||||
- Cloud SQL high CPU/memory
|
||||
- Memorystore connection failures
|
||||
- Ingress 5xx error rate
|
||||
- Low pod availability (fewer than minimum replicas running)
|
||||
|
||||
**Infisical Telemetry (OpenTelemetry):**
|
||||
|
||||
Infisical supports OpenTelemetry for application-level metrics and tracing. Enable it by adding these environment variables:
|
||||
|
||||
```yaml
|
||||
env:
|
||||
- name: OTEL_TELEMETRY_COLLECTION_ENABLED
|
||||
value: "true"
|
||||
- name: OTEL_EXPORTER_OTLP_ENDPOINT
|
||||
value: "http://otel-collector.monitoring.svc.cluster.local:4317"
|
||||
```
|
||||
|
||||
You can deploy an OpenTelemetry collector in your cluster and configure it to send metrics to Google Cloud Monitoring or a third-party observability platform.
|
||||
|
||||
**Uptime Checks:**
|
||||
|
||||
Create an uptime check in Cloud Monitoring to verify Infisical availability:
- Navigate to **Monitoring > Uptime checks** in the GCP Console.
- Create a new check for `https://infisical.example.com/api/status`.
- Set the check frequency (e.g., every 1 minute).
- Configure alert notifications if the check fails.
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
After completing the above steps, your Infisical instance should be up and running on GCP. You can now proceed with any necessary post-deployment steps like creating an admin account, configuring SMTP (for emails via SendGrid, Gmail API, or other provider), etc. The sections below provide additional guidance for operating your Infisical deployment in a production environment.
|
||||
|
||||
## Additional Configuration & Best Practices
|
||||
|
||||
<AccordionGroup>
|
||||
<Accordion title="Backup Strategy">
|
||||
Keeping regular backups is critical for a production deployment.
|
||||
|
||||
**Database Backups:** Use Cloud SQL automated backups or snapshots to regularly back up the PostgreSQL database. Ensure you have point-in-time recovery enabled by setting an appropriate retention period for automated backups. It is also a good practice to periodically take manual snapshots (for example, before major upgrades) and to test restoring those snapshots to validate your backup process. To create a manual backup, use `gcloud sql backups create --instance=INSTANCE_NAME`; to restore from a backup, use `gcloud sql backups restore BACKUP_ID --backup-instance=SOURCE_INSTANCE --restore-instance=TARGET_INSTANCE` (see the examples below).
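
For quick reference, the equivalent commands are shown below; instance names and the backup ID are placeholders:

```bash
# Create an on-demand backup of the Infisical Cloud SQL instance
gcloud sql backups create --instance=INSTANCE_NAME

# List backups to find a BACKUP_ID
gcloud sql backups list --instance=INSTANCE_NAME

# Restore a backup into a target instance
gcloud sql backups restore BACKUP_ID \
  --restore-instance=TARGET_INSTANCE \
  --backup-instance=SOURCE_INSTANCE
```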
|
||||
|
||||
**Encryption Key Backup:** The ENCRYPTION_KEY is required to decrypt your secrets in the database. Store a secure copy of this key in a protected vault or key management system (separately from the running container). If you lose this key, any encrypted data in the database becomes unrecoverable, even if the database is restored from backup. Treat this key as a crown jewel. Back it up offline and restrict access. Consider storing it in Google Secret Manager with very restricted IAM permissions, and also keep an offline encrypted backup in a secure physical location.
|
||||
|
||||
**Redis Cache Backups:** Infisical Redis is primarily used as a caching and ephemeral store. It is not required to back up Redis data for normal operations. In case of a Redis node failure, a new node will start empty and Infisical will rebuild cache entries as needed. However, if you use Redis for any persistent state (e.g., session data), consider enabling Redis persistence and backing up those snapshots. For most deployments, focusing on database backups is sufficient.
|
||||
|
||||
**Configuration and Other Data:** Keep copies of configuration files or environment values (except sensitive ones, which should be in Secret Manager anyway). If you have customized Infisical configuration (like custom certificates or plugins), ensure those are backed up as well. Store your Helm values files and Kubernetes manifests in version control (with secrets redacted).
|
||||
|
||||
**Disaster Recovery Drills:** Periodically simulate a recovery. For instance, restore the Cloud SQL database snapshot to a new instance and spin up Infisical in a test environment using that data and the saved ENCRYPTION_KEY. This will verify that your backups and keys are valid and that you know the restore procedure. Document the recovery steps and ensure your team is familiar with them.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Upgrade Instructions">
|
||||
Upgrading Infisical to apply updates or security patches should be done carefully to minimize downtime.
|
||||
|
||||
**Plan and Review:** Check Infisical release notes for the version you plan to upgrade to. Note any new environment variables or migration steps required. Ensure your current version is supported to upgrade directly. If not, you may need intermediate upgrades. Review breaking changes and deprecations.
|
||||
|
||||
**Backup:** Prior to upgrading, take a fresh backup of your PostgreSQL database (snapshot) and ensure you have the current ENCRYPTION_KEY secured. This guarantees that you can roll back if something goes wrong. Document your current configuration including all environment variables and Helm values.
|
||||
|
||||
**Update Helm Values:** Update your Helm values file with the image tag set to the new Infisical version. Also update any new or changed environment variables required by the new version. Review the Infisical changelog for any configuration changes needed.
|
||||
|
||||
**Deploy Update:** Use Helm to upgrade your deployment with `helm upgrade infisical infisical/infisical --namespace infisical --values infisical-values.yaml`. If your service has more than 1 replica (as recommended for HA), Kubernetes will perform a rolling update by launching new pods with the new version before stopping old ones, ensuring continuity. Monitor the deployment with `kubectl get pods -n infisical --watch`.
|
||||
|
||||
**Monitor Health:** Watch the pod status and logs during the upgrade. Use `kubectl logs -f deployment/infisical -n infisical` to follow logs in real time. Check for any errors during startup, such as failed database migrations. Monitor the load balancer health checks to ensure new pods are registering as healthy. Review Cloud Monitoring dashboards for any anomalies in metrics (CPU, memory, error rates).
|
||||
|
||||
**Post-Upgrade Tests:** Once the new version is running, quickly test core functionality. Log into the Infisical dashboard, ensure secrets can be accessed, verify that background jobs (if any, like secret syncing or integrations) are working. Test API endpoints and integrations. Verify email sending if configured.
|
||||
|
||||
**Roll-back Plan:** If the new version is not functioning correctly and you need to roll back, you can revert to the previous Helm release using `helm rollback infisical -n infisical`. Alternatively, update your values file to use the previous image tag and run `helm upgrade` again. Having the database snapshot from before the upgrade is useful in case the new version made breaking changes to the database schema. In such a case, you might need to restore the database to the old snapshot and use the old container version.
|
||||
|
||||
**Zero-Downtime Tip:** To achieve zero-downtime upgrades, ensure you have at least 2 replicas running during deployment. Configure appropriate pod disruption budgets and health check grace periods. Set `maxUnavailable` to 0 and `maxSurge` to at least 1 in your deployment strategy (sketched below) to ensure new pods are fully ready before old ones are terminated. Use readiness probes with appropriate initial delays to prevent premature traffic routing to pods that are still initializing.
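
If your chart or deployment manifest exposes the rollout strategy, these settings look roughly like this (field support depends on the chart you use):

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
    maxSurge: 1
```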
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Monitoring & Telemetry">
|
||||
Maintaining visibility into your Infisical deployment is important for reliability and performance.
|
||||
|
||||
**Google Cloud Logging:** All Infisical container logs are shipped to Cloud Logging as configured automatically by GKE. Set up Cloud Logging retention as needed (the default is 30 days, but you may choose a different retention period based on compliance requirements). You can search the logs for errors or set up Log Analytics queries for common issues. Consider creating log-based alerting policies in Cloud Monitoring for critical log patterns (e.g., out-of-memory errors, authentication failures, database connection errors).
|
||||
|
||||
**Metrics and Auto Scaling:** Enable Container Insights for GKE to get detailed CPU, memory, and network metrics for your pods and cluster. This can help you visualize resource usage. You can create Cloud Monitoring alerts on high CPU or memory, and tie them to GKE Horizontal Pod Autoscaling to automatically scale out or in pods based on demand. For example, you might target keeping CPU utilization at 70 percent and allow scaling between 2 and 10 pods. Configure the HPA in your Helm values with appropriate metrics and thresholds.
|
||||
|
||||
**Application Health:** The load balancer health check provides basic availability monitoring. You can augment this with Cloud Monitoring uptime checks to regularly test the Infisical HTTPS endpoint and alert if it is down or responding slowly. Set up synthetic monitoring to simulate user journeys and detect issues before users report them.
|
||||
|
||||
**Infisical Telemetry:** Infisical natively exposes metrics in OpenTelemetry (OTEL) format. You can enable detailed application metrics by setting environment variables such as OTEL_TELEMETRY_COLLECTION_ENABLED set to true and choosing an export type. Infisical can expose a metrics endpoint on a separate port for Prometheus scrapes or push metrics to an OTEL collector. By integrating these metrics, you can monitor internal stats like request rates, latency, errors, and more using tools like Prometheus and Grafana or cloud monitoring services. Deploy the OpenTelemetry Collector in your GKE cluster and configure it to export to Google Cloud Monitoring.
|
||||
|
||||
**Tracing and APM:** If deeper tracing is required, run the Infisical container with an OpenTelemetry agent to capture distributed traces. Configure the agent to export traces to Google Cloud Trace or a third-party APM solution. Check Infisical documentation for any distributed tracing support and configuration options. At minimum, logs and metrics should cover most needs, but tracing can help with debugging complex performance issues.
|
||||
|
||||
**Alerting:** Set up alerts for key events. Configure Cloud Monitoring alert policies to email or notify you on high error rates, low available database storage, high CPU on the tasks, pod restarts, ingress errors, and more. Ensure your team is notified proactively of any issue with the Infisical service so you can respond quickly. Use notification channels like email, SMS, PagerDuty, or Slack for critical alerts. Define escalation policies for unacknowledged alerts.
|
||||
|
||||
**Dashboards:** Create custom Cloud Monitoring dashboards that consolidate key metrics for Infisical, Cloud SQL, Memorystore, and GKE. Include charts for request rate, error rate, latency percentiles, pod CPU and memory usage, database connections, cache hit ratio, and more. Share dashboards with your team and display them on monitors for real-time visibility.
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
title: "Kubernetes via Helm Chart"
|
||||
description: "Learn how to use Helm chart to install Infisical on your Kubernetes cluster."
|
||||
---
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- You have extensive understanding of [Kubernetes](https://kubernetes.io/)
|
||||
- Installed [Helm package manager](https://helm.sh/) version v3.11.3 or greater
|
||||
- You have [kubectl](https://kubernetes.io/docs/reference/kubectl/kubectl/) installed and connected to your kubernetes cluster
|
||||
|
||||
<Steps>
|
||||
<Step title="Install Infisical Helm repository ">
|
||||
```bash
|
||||
helm repo add infisical-helm-charts 'https://dl.cloudsmith.io/public/infisical/helm-charts/helm/charts/'
|
||||
```
|
||||
```bash
|
||||
helm repo update
|
||||
```
|
||||
</Step>
|
||||
<Step title="Add Helm values">
|
||||
Create a `values.yaml` file. This will be used to configure settings for the Infisical Helm chart.
|
||||
To explore all configurable properties for your values file, [visit this page](https://raw.githubusercontent.com/Infisical/infisical/main/helm-charts/infisical-standalone-postgres/values.yaml).
|
||||
</Step>
|
||||
<Step title="Select Infisical version">
|
||||
By default, the Infisical version set in your helm chart will likely be outdated.
|
||||
Choose the latest Infisical docker image tag from [here](https://hub.docker.com/r/infisical/infisical/tags).
|
||||
|
||||
|
||||
```yaml values.yaml
|
||||
infisical:
|
||||
image:
|
||||
repository: infisical/infisical
|
||||
tag: "<>" #<-- select tag from Dockerhub from the above link
|
||||
pullPolicy: IfNotPresent
|
||||
```
|
||||
<Warning>
|
||||
Do not use the latest docker image tag in production deployments as they can introduce unexpected changes
|
||||
</Warning>
|
||||
</Step>
|
||||
|
||||
<Step title="Configure environment variables">
|
||||
|
||||
To deploy this Helm chart, a Kubernetes secret named `infisical-secrets` must be present in the same namespace where the chart is being deployed.
|
||||
|
||||
For a minimal installation of Infisical, you need to configure `ENCRYPTION_KEY`, `AUTH_SECRET`, `DB_CONNECTION_URI`, `SITE_URL`, and `REDIS_URL`. [Learn more about configuration settings](/self-hosting/configuration/envars).
|
||||
|
||||
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Proof of concept deployment">
|
||||
For test or proof-of-concept purposes, you may omit `DB_CONNECTION_URI` and `REDIS_URL` from `infisical-secrets`. This is because the Helm chart will automatically provision and connect to the in-cluster instances of Postgres and Redis by default.
|
||||
```yaml simple-values-example.yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: infisical-secrets
|
||||
type: Opaque
|
||||
stringData:
|
||||
AUTH_SECRET: <>
|
||||
ENCRYPTION_KEY: <>
|
||||
SITE_URL: <>
|
||||
```
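
Apply the manifest into the namespace where the chart will be installed (the file name follows the example above; pass `-n <namespace>` if you are not using the default namespace):

```bash
kubectl apply -f simple-values-example.yaml
```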
|
||||
</Tab>
|
||||
<Tab title="Production deployment">
|
||||
For production environments, we recommend using Cloud-based Platform as a Service (PaaS) solutions for PostgreSQL and Redis to ensure high availability. In on-premise setups, it's recommended to configure Redis and Postgres for high availability, either by using Bitnami charts or a custom configuration.
|
||||
|
||||
|
||||
```yaml simple-values-example.yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: infisical-secrets
|
||||
type: Opaque
|
||||
stringData:
|
||||
AUTH_SECRET: <>
|
||||
ENCRYPTION_KEY: <>
|
||||
REDIS_URL: <>
|
||||
DB_CONNECTION_URI: <>
|
||||
SITE_URL: <>
|
||||
```
|
||||
|
||||
|
||||
<Tip>
|
||||
If you need to configure the SSL certificate for your production Postgres instance, you can use the `DB_ROOT_CERT` environment variable. [Learn more about configuring the SSL certificate](/self-hosting/configuration/envars#aws-rds).
|
||||
</Tip>
|
||||
</Tab>
|
||||
</Tabs>
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Routing traffic to Infisical">
|
||||
By default, this chart uses Nginx as its Ingress controller to direct traffic to Infisical services.
|
||||
|
||||
|
||||
```yaml values.yaml
|
||||
ingress:
|
||||
nginx:
|
||||
enabled: true
|
||||
```
|
||||
</Step>
|
||||
|
||||
|
||||
<Step title="Install the Helm chart ">
|
||||
Once you are done configuring your `values.yaml` file, run the command below.
|
||||
|
||||
|
||||
```bash
|
||||
helm upgrade --install infisical infisical-helm-charts/infisical-standalone --values /path/to/values.yaml
|
||||
```
|
||||
  <Accordion title="Full helm values example">
  ```yaml values.yaml
  nameOverride: "infisical"
  fullnameOverride: "infisical"

  infisical:
    enabled: true
    name: infisical
    autoDatabaseSchemaMigration: true
    fullnameOverride: ""
    podAnnotations: {}
    deploymentAnnotations: {}
    replicaCount: 6

    image:
      repository: infisical/infisical
      tag: "v0.46.2-postgres"
      pullPolicy: IfNotPresent

    affinity: {}
    kubeSecretRef: "infisical-secrets"
    service:
      annotations: {}
      type: ClusterIP
      nodePort: ""

    resources:
      limits:
        memory: 210Mi
      requests:
        cpu: 200m

  ingress:
    enabled: true
    hostName: ""
    ingressClassName: nginx
    nginx:
      enabled: true
    annotations: {}
    tls: []

  postgresql:
    enabled: true
    name: "postgresql"
    fullnameOverride: "postgresql"
    auth:
      username: infisical
      password: root
      database: infisicalDB

  redis:
    enabled: true
    name: "redis"
    fullnameOverride: "redis"
    cluster:
      enabled: false
    usePassword: true
    auth:
      password: "mysecretpassword"
    architecture: standalone
  ```
  </Accordion>
  </Step>

  <Step title="Access Infisical">
  After deployment, please wait a few minutes for all pods to reach the Running state. Once the pods are up, you can access Infisical via the external address provided by the Kubernetes Ingress. For example, if using a cloud load balancer, find the external IP/hostname by running `kubectl get ingress` in the Infisical namespace, and then navigate to `http://<external-ip-or-host>` (or `https://` if TLS is enabled).
  
  </Step>

  <Step title="Upgrade your instance">
  To upgrade your instance of Infisical, simply update the Docker image tag in your Helm values and rerun the command below.

  ```bash
  helm upgrade --install infisical infisical-helm-charts/infisical-standalone --values /path/to/values.yaml
  ```

  <Tip>
    Always back up your database before each upgrade, especially in a production environment.
  </Tip>
  </Step>
</Steps>

## Additional Configuration & Best Practices

<AccordionGroup>
  <Accordion title="Backup Strategy">
    It is critical to regularly back up Infisical's primary datastore (PostgreSQL) and also have a strategy for Redis:
    - **PostgreSQL**: If you use the in-cluster Postgres deployed by the Helm chart, schedule routine backups (for example, using `pg_dump` or volume snapshots) to prevent data loss. If you use a managed cloud database (such as AWS RDS or GCP Cloud SQL), leverage its automated backups and point-in-time recovery features. Always verify your backups and have a restore procedure in place.
    - **Redis**: Infisical uses Redis as a transient cache to improve performance. Loss of Redis data will not lose any secrets (only cause cache misses), so strict backups are not as critical. For managed Redis services, you may enable snapshots or backups as an extra precaution. If using the in-cluster Redis, note that it runs as a single instance without persistence by default; you can enable Redis persistence (e.g., RDB snapshots or AOF) if preserving cache state is important. In general, focus on backups for Redis only if your use case requires fast cache recovery.
  </Accordion>

  <Accordion title="Upgrade Instructions">
    To upgrade your Infisical instance to a newer version using Helm, follow these steps:
    1. **Back up your database** (PostgreSQL) before upgrading, especially in production.
    2. Update your Helm values to use the desired Infisical image version – edit `infisical.image.tag` in your `values.yaml` to the new version (avoid using the `latest` tag in production; pick a specific version).
    3. Run the Helm upgrade command with your updated values file:
    ```bash
    helm upgrade --install infisical infisical-helm-charts/infisical-standalone --values /path/to/values.yaml
    ```
    4. Monitor the upgrade rollout until all Infisical pods are updated and running. You can watch the status with `kubectl get pods` to ensure the new pods come up healthy.
    5. (Optional) Test upgrades in a staging environment first, and review Infisical's release notes for any breaking changes when jumping between major versions. This helps minimize surprises during production upgrades.
  </Accordion>

  <Accordion title="Monitoring & Telemetry">
    Infisical can export telemetry data for monitoring:
    - **Prometheus metrics (pull-based)**: Infisical can expose metrics in Prometheus format. To enable this, set the environment variables `OTEL_TELEMETRY_COLLECTION_ENABLED=true` and `OTEL_EXPORT_TYPE=prometheus` in the `infisical-secrets` config. This will make Infisical serve metrics at the `/metrics` endpoint on port `9464` of the Infisical pods. Ensure this port is accessible: for example, you can add a Service to expose port 9464 of the pods, or if you're using Prometheus Operator, create a ServiceMonitor selecting the Infisical pods. Once set up, Prometheus can scrape these metrics (e.g., request rates, latency, etc.), which you can visualize in Grafana.
    - **OpenTelemetry (push-based)**: Infisical also supports pushing metrics to an OpenTelemetry collector or APM platform. To use this mode, set `OTEL_TELEMETRY_COLLECTION_ENABLED=true` and `OTEL_EXPORT_TYPE=otlp`, and configure `OTEL_EXPORT_OTLP_ENDPOINT` with the URL of your OTEL collector or monitoring backend. Infisical will then send metrics (in OTLP format) to that endpoint. This approach allows integration with cloud monitoring services (Datadog, New Relic, etc.) or your own OTEL collector for advanced observability. (Ensure network connectivity from Infisical pods to the collector endpoint.)
  </Accordion>

  <Accordion title="Enable HTTPS Access (TLS)">
    By default, the Infisical Helm chart sets up an HTTP ingress. For production, you should enable HTTPS:
    - **Cert-Manager automated certificates**: If you have [cert-manager](https://cert-manager.io) installed, you can automate TLS certificate provisioning. First, configure a ClusterIssuer/Issuer (for example, one using Let's Encrypt). Then update the Infisical Helm values to include an annotation and TLS settings for the ingress. For example:
    ```yaml
    ingress:
      enabled: true
      hostName: ""
      ingressClassName: nginx
      nginx:
        enabled: true
      annotations:
        cert-manager.io/cluster-issuer: "letsencrypt-prod"
      tls:
        - secretName: infisical-tls
          hosts:
            - infisical.example.com
    ```
    In this example, cert-manager will request a certificate for **infisical.example.com** and store it in the `infisical-tls` secret, which the ingress will use for TLS termination.
    - **Manual TLS setup**: Alternatively, you can provide your own TLS certificate. Obtain an SSL certificate (from a CA or self-signed for testing) and create a Kubernetes TLS Secret (e.g., named `infisical-tls`) containing the cert and private key. Then in your Helm `values.yaml`, set the `ingress.tls` section to reference that secret and your host. Ensure your DNS points the host (e.g., `infisical.example.com`) to the ingress controller's IP. The Infisical ingress will then serve HTTPS using the secret you provided.
  </Accordion>

  <Accordion title="Scalability, HA, and Disaster Recovery">
    - **Application scaling**: Infisical's application layer is stateless, meaning you can run multiple replicas behind a load balancer to handle higher loads. To scale out, increase the `infisical.replicaCount` in your values (the example above uses 6 replicas) and apply the changes via Helm. You can also configure a Horizontal Pod Autoscaler to automatically adjust the number of Infisical pods based on CPU/memory usage. Running at least 2 replicas in production is recommended for high availability at the application level.
    - **Database high availability**: For production, use a highly available PostgreSQL deployment. Instead of a single in-cluster database, consider an external managed database with multi-zone redundancy, or deploy Postgres with a replication operator (e.g., the Zalando Postgres Operator) to get a primary-replica setup. The goal is to avoid a single point of failure in the database. Similarly, if Redis is critical for your usage, consider using a Redis mode that supports failover (such as enabling Redis Sentinel or cluster mode in the Helm values, or using a managed Redis service with replication). Note that Infisical can tolerate Redis being temporarily unavailable (it will just be slower), but a highly available Redis will prevent any performance degradation in a failure scenario.
    - **Disaster recovery**: Plan for how to recover if your cluster or region goes down. Regularly back up the Infisical PostgreSQL data and store backups in an off-site or cloud storage location (as mentioned in Backup Strategy above). In a serious outage, you should be prepared to redeploy Infisical in a new environment and restore the database from backups. For enhanced DR, some organizations set up a standby environment in a different region: for example, a read-replica of the database in another region (or frequent backup shipping), and Infisical application instances that can be quickly started in that region. Because the Infisical application is stateless, recovering from a disaster primarily involves restoring the database and pointing a new Infisical deployment to it. Ensure that secrets like `ENCRYPTION_KEY` and `AUTH_SECRET` are securely stored so the new deployment can use the same keys to decrypt data.
  </Accordion>
</AccordionGroup>
|
||||
|
||||
sidebarTitle: "Introduction"
|
||||
description: "Learn how to self-host Infisical on your own infrastructure."
|
||||
---
|
||||
|
||||
Self-hosting Infisical lets you retain data on your own infrastructure and network. Many organizations choose self-hosting for benefits in compliance and flexibility.
|
||||
|
||||
- Compliance: Deploying Infisical in your own environment helps meet strict regulatory requirements (e.g. SOC 2, HIPAA, FIPS 140-3) by aligning with your existing security controls. All data remains under your governance, and Infisical’s architecture supports compliance needs through strong encryption and access controls.
|
||||
- Flexibility: You have complete control over deployment topology and integrations. Infisical can run in diverse environments, from air-gapped bare-metal servers to Kubernetes clusters and cloud VMs, letting you tailor networking, security, and performance configurations to fit your organization's needs.
|
||||
|
||||
|
||||
|
||||
Choose from a number of deployment options listed below to get started.
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card
|
||||
title="Docker"
|
||||
color="#000000"
|
||||
|
||||
>
|
||||
Use the fully packaged docker image to deploy Infisical anywhere.
|
||||
</Card>
|
||||
<Card
|
||||
title="Kubernetes"
|
||||
color="#000000"
|
||||
icon="kubernetes"
|
||||
href="./deployment-options/kubernetes-helm"
|
||||
>
|
||||
Use our Helm chart to install Infisical on your Kubernetes cluster.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
<CardGroup cols={2}>
|
||||
<Card
|
||||
title="Docker Compose"
|
||||
|
||||
Install Infisical using our Docker Compose template.
|
||||
</Card>
|
||||
<Card
|
||||
title="Kubernetes"
|
||||
title="Docker Swarm"
|
||||
color="#000000"
|
||||
icon="gear-complex-code"
|
||||
href="./deployment-options/kubernetes-helm"
|
||||
icon="docker"
|
||||
href="./deployment-options/docker-swarm"
|
||||
>
|
||||
Use our Helm chart to Install Infisical on your Kubernetes cluster.
|
||||
Install Infisical using our Docker Swarm template
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card
|
||||
title="AWS"
|
||||
color="#000000"
|
||||
icon="aws"
|
||||
href="./deployment-options/aws-native"
|
||||
>
|
||||
Deploy Infisical on AWS
|
||||
</Card>
|
||||
<Card
|
||||
title="GCP"
|
||||
color="#000000"
|
||||
icon="google"
|
||||
href="./deployment-options/docker-swarm"
|
||||
>
|
||||
Deploy Infisical on GCP
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
<Card
|
||||
title="Linux package"
|
||||
color="#000000"
|
||||
icon="linux"
|
||||
href="./deployment-options/native/linux-package/installation"
|
||||
>
|
||||
Install Infisical on your system without containers using our Linux package.
|
||||
|
||||