How to Connect to GCP Cloud SQL with Cloud SQL Auth Proxy in Docker


Cloud SQL on the Google Cloud Platform (GCP) is a great service for hosting your relational databases in the cloud. There are standard ways to connect to Cloud SQL from GCP resources like Cloud Run, Compute Engine, etc. However, it’s not as well documented how to connect a dockerized application to Cloud SQL.

The Cloud SQL Auth proxy is the recommended way to connect your dockerized applications to Cloud SQL. It provides secure access to your Cloud SQL instances without the need for authorized networks or for configuring SSL.

In this post, we will introduce how to use the Cloud SQL Auth proxy in various ways, with a focus on how to write a docker-compose.yaml file for connecting dockerized applications to Cloud SQL instances.


Preparation

At this point, I assume you already have a GCP Cloud SQL instance available. The creation of a Cloud SQL instance is generally done by a system or DevOps engineer. However, if you work on your own projects or just want to learn, you can create one yourself. Head to the GCP console and follow the instructions; it should be fairly straightforward. Remember to create a DB user, which will be used later.

If you want to run the code in this post on your personal laptop or desktop, it’s recommended to set up your local environment to work with GCP.

Make sure your Google account or service account has at least the “Cloud SQL Client” role. It can also be assigned the “Cloud SQL Editor” or “Cloud SQL Admin” role if applicable.

Make sure the Cloud SQL Admin API is enabled.


Use the Cloud SQL Auth proxy command directly

There are some occasions when using the Cloud SQL Auth proxy command directly (without Docker) is preferable. For example, when you want to run the command as a startup script or run it as a service on your bare-metal machine or cloud virtual machine (VM).

Learning the Cloud SQL Auth proxy command is also helpful for use in Docker, as the command is the same; the only difference is that the latter runs in a Docker container.

First, we need to download the binary file of the Cloud SQL Auth proxy:

sudo curl -o /usr/local/bin/cloud-sql-proxy \
  https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.0.0/cloud-sql-proxy.linux.amd64

sudo chmod +x /usr/local/bin/cloud-sql-proxy

You can find the latest version of Cloud SQL Auth proxy in its GitHub repo.

Then we can use the Cloud SQL Auth proxy to connect to our Cloud SQL instance. The comments below explain the context for each command:

# 1. Find the instance name to be used.
#    It should have a format of myproject:myregion:myinstance.
gcloud sql instances describe <INSTANCE_NAME> --format='value(connectionName)'

# 2. Use the long instance connection name returned by 1 for cloud-sql-proxy.
cloud-sql-proxy --port 13306 INSTANCE_CONNECTION_NAME &
# Note that a high port is used to avoid potential port conflicts.
# It may not be needed for your use case.

# 3. When not running on GCP resources and GOOGLE_APPLICATION_CREDENTIALS is
#    not set locally, we need to specify the key file directly.
cloud-sql-proxy --port 13306 INSTANCE_CONNECTION_NAME \
    --credentials-file /local/path/to/service-account-key.json &

# 4. On GCP compute engine, you can connect by a private IP if the compute
#    engine and SQL instance are in the same VPC network.
cloud-sql-proxy --port 13306 INSTANCE_CONNECTION_NAME --private-ip &

# 5. You can connect to multiple Cloud SQL instances at the same time, which
#    can be handy for local development as you may need to access multiple
#    databases at once.
cloud-sql-proxy "INSTANCE_CONNECTION_NAME_1?port=13306" \
    "INSTANCE_CONNECTION_NAME_2?port=13307" &
# Remember to specify different ports for different instances.

Note that an ampersand (&) is put at the end of the cloud-sql-proxy command so that the Cloud SQL Auth proxy runs as a background process. This way, we won’t accidentally close the connection, and we don’t need to open a new tab to connect to our SQL instance with a SQL client right away.

We can use the jobs command to see the processes running in the background. We can also use fg %N or kill %N (where N is the number returned by jobs) to bring the process to the foreground or to kill it.

When you see:

The proxy has started successfully and is ready for new connections!

You can then connect to your SQL instance with your SQL client or in your application with some libraries like SQLAlchemy.

For a MySQL client, the command is:

mysql -h 127.0.0.1 -P 13306 -u DB_USERNAME -p

Note that the host is 127.0.0.1, namely, our local host as we are using a proxy to connect to our database.
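The same applies to application code: with a library like SQLAlchemy, the connection URL targets the proxy’s local address and port instead of the instance itself. Here is a minimal sketch; the user, password, and database name are placeholder values:

```python
# Build a SQLAlchemy-style connection URL that points at the local proxy.
# DB_USER, DB_PASSWORD, and DB_NAME are placeholders for your own values.
DB_USER = "myuser"
DB_PASSWORD = "mypassword"
DB_NAME = "app"
PROXY_HOST = "127.0.0.1"  # the proxy listens on the local host
PROXY_PORT = 13306        # matches the --port flag passed to cloud-sql-proxy

db_conn_url = (
    f"mysql+pymysql://{DB_USER}:{DB_PASSWORD}"
    f"@{PROXY_HOST}:{PROXY_PORT}/{DB_NAME}"
)
print(db_conn_url)
# With SQLAlchemy and PyMySQL installed, you would then call:
# engine = create_engine(db_conn_url)
```

As far as your application is concerned, the database lives on localhost; the proxy handles authentication and encryption to the actual instance.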

If you need to write complex SQL queries, it’s highly recommended to use a graphical tool like DBeaver, a free universal tool for managing various databases. It has great support for syntax highlighting, autocompletion, and many other useful features. If you haven’t tried it before, it’s well worth a look.


Use the Cloud SQL Auth proxy with Docker

We can also start the Cloud SQL Auth proxy with Docker, which is useful when you don’t want to, or cannot, install the cloud-sql-proxy binary.

It’s pretty straightforward to start the Cloud SQL Auth proxy with Docker; you just need these commands:

docker pull gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.0.0

docker run -d \
  -v /local/path/to/service-account-key.json:/container/path/to/service-account-key.json \
  -p 127.0.0.1:3306:3306 \
  gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.0.0 \
  --address 0.0.0.0 --port 3306 \
  --credentials-file /container/path/to/service-account-key.json INSTANCE_CONNECTION_NAME

Note that -p 127.0.0.1:3306:3306 publishes the port on the local host only, so the Cloud SQL Auth proxy is not exposed outside the machine, whereas --address 0.0.0.0 makes the proxy listen on all interfaces inside the container so the published port can reach it.

It’s not very common to start the Cloud SQL Auth proxy with Docker directly. A more useful way is to use Docker Compose as we will see in the next section.


Use the Cloud SQL Auth proxy with Docker Compose

Normally we don’t use Docker to start a standalone container for the Cloud SQL Auth proxy. Instead, we specify it together with our application in a docker-compose.yaml file. This way, our application is dockerized properly, and every developer can run it on their own machine together with the required database dependencies.

We will see how to put a FastAPI microservice and Cloud SQL Auth proxy in the same docker-compose.yaml file.

But first, let’s see how to create a docker-compose.yaml file just for the Cloud SQL Auth proxy, because there are some details we should clarify first. Our initial docker-compose.yaml file looks like this:

version: "3.9"

services:
  cloudsql:
    image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.0.0
    volumes:
      - type: bind
        source: ./service-account-key.json
        target: /service-account-key.json
        read_only: true
    ports:
      - target: 3306
        published: 13306
    command: --address 0.0.0.0 --port 3306 --credentials-file /service-account-key.json glass-core-xxxxxx:europe-west1:gs-mysql

As we can see, the settings are the same as the ones used with Docker directly above. Note that for the command key, we should not specify the cloud-sql-proxy executable itself, as it’s already set as the entrypoint of the Docker image.

Then we can bring up the cloudsql service by:

docker-compose up -d

Normally, as a developer, you would have permission to access your databases with your personal Google account. If you have logged into GCP with gcloud auth login, you can just bind the ~/.config/gcloud folder to the container and don’t need to provide a service account key file:

version: "3.9"

services:
  cloudsql:
    image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.0.0
    volumes:
      - type: bind
        source: ~/.config/gcloud
        target: /home/nonroot/.config/gcloud
        read_only: true
    ports:
      - target: 3306
        published: 13306
    command: --address 0.0.0.0 --port 3306 glass-core-xxxxxx:europe-west1:gs-mysql

This is the preferable way for local development because using a service account key file carries the risk of leaking the key and causing security issues.

However, if you bring up the cloudsql service with this docker-compose.yaml file directly, you will normally see this error:

open /home/nonroot/.config/gcloud/application_default_credentials.json: permission denied

The reason is that we are binding the ~/.config/gcloud folder from our local host to a folder in the container which will be used by the nonroot user. Therefore, we need to change the permissions for the ~/.config/gcloud folder so it can be accessed by the nonroot user in the container:

find ~/.config/gcloud/ -type d | xargs -I {} chmod 755 {}
find ~/.config/gcloud/ -type f | xargs -I {} chmod 644 {}

These two commands make the ~/.config/gcloud/ folder, its subfolders, and all their contents accessible to other users (non-owner, non-group users). Please check this post if you want to know more about these commands.
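As a quick sanity check on what those modes mean, Python’s standard stat module can render them in the familiar ls -l form:

```python
import stat

# 0o755 on directories: owner rwx, group and others r-x. The execute bit
# lets other users (like the container's nonroot user) enter the directory.
print(stat.filemode(stat.S_IFDIR | 0o755))  # drwxr-xr-x

# 0o644 on files: owner rw-, group and others r--. Read access is enough
# for the proxy to load the credentials file.
print(stat.filemode(stat.S_IFREG | 0o644))  # -rw-r--r--
```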

If you bring up the service again, everything should work properly.

Now let’s add our FastAPI application to the docker-compose.yaml file:

version: "3.9"

services:
  my-app:
    build:
      context: ./app
    image: my-app:latest
    ports:
      - target: 80
        published: 8080
    networks:
      - my-app
    volumes:
      - type: bind
        source: ./app
        target: /app
    env_file:
      - ./secrets.env
    environment:
      - PYTHONPATH=/app:.:..
      - DB_NAME=app
      - DB_HOST=cloudsql
      - DB_PORT=3306
    depends_on:
      - cloudsql

  cloudsql:
    image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.0.0
    volumes:
      - type: bind
        source: ~/.config/gcloud
        target: /home/nonroot/.config/gcloud
        read_only: true
    ports:
      - target: 3306
        published: 13306
    networks:
      - my-app
    command: --address 0.0.0.0 --port 3306 glass-core-xxsxxx:europe-west1:gs-mysql

networks:
  my-app:
    name: my-app
    driver: bridge

Key points for this docker-compose.yaml file:

  1. The application and Cloud SQL Auth proxy services must be on the same network. Otherwise, the application cannot reach the database.
  2. Non-sensitive environment variables are set as plain text under the environment key, and sensitive ones via the secret file secrets.env. This file should not be added to the repository for version control.
  3. The application is set to depend on the Cloud SQL Auth proxy, so the proxy is started before the application that needs the database.
  4. A special environment variable PYTHONPATH is set to /app:.:.. to make it easier to import modules by relative or absolute paths.

We can use Pydantic to read the environment variables in our application, which is really convenient:

# app/db/db.py
from pydantic import BaseSettings, Field, SecretStr
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker, Session
from sqlalchemy.schema import MetaData

# Create a base for SQLAlchemy mappers.
Base = declarative_base(metadata=MetaData(schema="app"))
metadata = Base.metadata


# A Pydantic model to get environment variables.
class DbSettings(BaseSettings):
    """Settings for SQL."""

    host: str = Field(..., env="DB_HOST")
    user: str = Field(..., env="DB_USERNAME")
    password: SecretStr = Field(..., env="DB_PASSWORD")
    port: int = Field(env="DB_PORT", default=3306)
    db_name: str = Field(env="DB_NAME", default="app")


# We need to create an instance of the Pydantic model to access the
# environment variables.
db_settings = DbSettings()

db_conn_url = (
    "mysql+pymysql://"
    f"{db_settings.user}:{db_settings.password.get_secret_value()}"
    f"@{db_settings.host}:{db_settings.port}/{db_settings.db_name}"
)

# Create SQLAlchemy SQL engine and session factory.
engine = create_engine(db_conn_url)
session_factory = sessionmaker(bind=engine)
scoped_session_factory = scoped_session(session_factory)


def get_db_sess():
    """Get a SQLAlchemy ORM Session instance.

    Yields:
        A SQLAlchemy ORM Session instance.
    """
    db_session: Session = scoped_session_factory()
    try:
        yield db_session
    except Exception as exc:
        db_session.rollback()
        raise exc
    finally:
        db_session.close()

The above code can be used as a template for any Python microservice.
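Outside of FastAPI’s dependency injection, the same generator pattern can be driven by hand. The sketch below uses a dummy session class (a stand-in, not a real SQLAlchemy Session) to show the rollback-on-error and always-close behavior that get_db_sess implements:

```python
class DummySession:
    """Records calls so we can observe the cleanup order."""

    def __init__(self):
        self.events = []

    def rollback(self):
        self.events.append("rollback")

    def close(self):
        self.events.append("close")


def get_session(session):
    """Mirror get_db_sess: yield the session, roll back on error, always close."""
    try:
        yield session
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()


# Happy path: the session is simply closed when the generator is cleaned up.
ok = DummySession()
gen = get_session(ok)
next(gen)    # obtain the session, as FastAPI does for a Depends() parameter
gen.close()  # generator cleanup runs the finally block
print(ok.events)  # ['close']

# Error path: an exception raised at the yield point triggers a rollback
# before the session is closed.
bad = DummySession()
gen = get_session(bad)
next(gen)
try:
    gen.throw(RuntimeError("query failed"))
except RuntimeError:
    pass
print(bad.events)  # ['rollback', 'close']
```

This is exactly the contract FastAPI relies on: the code after yield runs once the request is finished, whether it succeeded or raised.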

In our simple application, we can read our users’ data with a FastAPI dependency injection:

# app/main.py
from fastapi import Depends, FastAPI
from sqlalchemy.orm import Session

from app.db.db import get_db_sess
from app.db.models.users import User as UserModel
from app.schema.users import User as UserSchema

app = FastAPI()


@app.get("/users")
async def get_users(
    db_sess: Session = Depends(get_db_sess),
) -> list[UserSchema]:
    users = db_sess.query(UserModel).all()

    return users

This GitHub repo contains all the code for this post. It’s recommended to check the related posts on Pydantic, SQLAlchemy, and FastAPI if you want to dive into the code. These are all very popular libraries for Python developers and are well worth adding to your toolset.


In this post, we have introduced how to use the Cloud SQL Auth proxy directly on a bare-metal machine or virtual machine. Different authentication methods were covered, including default application credentials and a service account key file.

We have also introduced how to start the Cloud SQL Auth proxy with Docker directly. However, this is not recommended as it’s not portable, which makes it difficult to share the settings with other developers. A preferable way is to use a docker-compose.yaml file to connect our dockerized applications to Cloud SQL instances.

A simple but practical example of how to use the Cloud SQL Auth proxy in a real project is provided which can serve as a template for your own projects.

