Get started with Google Cloud Platform #2

In this second part, we will familiarize ourselves with the Google Cloud command-line utility (gcloud) and see how to manage a variety of GCP resources through that CLI.


Using gcloud documentation

gcloud -h
gcloud --help

# Get documentation about a specific gcloud command group
gcloud help <group>
# or 
gcloud <group> --help

# Ex: get usage help for the gcloud 'config' command group
gcloud help config
# or
gcloud config --help

# Get documentation for a specific topic
gcloud topic <topic>

# Ex: get documentation about filtering gcloud commands outputs
gcloud topic filters
gcloud topic formats

Filtering gcloud command outputs

# Get external IP of the VM named myvm:
gcloud compute instances list --filter=name:myvm --format='value(EXTERNAL_IP)'

# List firewall rules from the default VPC network that allows ICMP
gcloud compute firewall-rules list --filter="NETWORK:'default' AND ALLOW:'icmp'"

# List compute images matching 'keyword' and limit output to 10
gcloud compute images list --filter="name ~ keyword" --limit 10
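Besides `--format='value(...)'`, gcloud can emit full JSON (`--format=json`) for post-processing in scripts. Below, the gcloud output is stubbed with a hypothetical one-instance sample so the extraction can be tried without a project:

```shell
# Hypothetical sample of `gcloud compute instances list --format=json` output
json='[{"name": "myvm", "networkInterfaces": [{"accessConfigs": [{"natIP": "203.0.113.7"}]}]}]'

# Pull the external (NAT) IP out with sed (a jq one-liner would be cleaner if jq is available)
ip=$(printf '%s' "$json" | sed -n 's/.*"natIP": "\([0-9.]*\)".*/\1/p')
echo "$ip"
```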

Getting project info

# Get info about the project config (zone, region, project ID...)
gcloud config list [--all]

# Get other info about the project (quotas...)
gcloud compute project-info describe --project $(gcloud config get-value project)

Authentication and project setup

# Login with a user account
gcloud auth login <user_email>

# Login as a service account
gcloud auth activate-service-account --key-file=<service_account_json_key_file>

# Set current project to work with
gcloud config set project $project_id
# Set default compute region and zone
gcloud config set compute/region $region
gcloud config set compute/zone $zone

# Show currently set region and zone values
gcloud config get-value compute/region
gcloud config get-value compute/zone

Granting IAM roles

# Grant bigquery user role to a service account
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
    --member serviceAccount:${service_account_email} --role roles/bigquery.user

# Grant bigquery data viewer role to a service account
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
    --member serviceAccount:${service_account_email} --role roles/bigquery.dataViewer

# Granting custom roles to service accounts
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
    --member serviceAccount:${service_account_email} --role organizations/${ORGANIZATION_ID}/roles/${custom_role_name}

Managing IAM roles

  • Getting roles info
# List predefined roles (don't specify the project ID)
gcloud iam roles list

# List custom roles
gcloud iam roles list --project $GCP_PROJECT_ID [--show-deleted]

# Get metadata about a specific role, including the role's permissions
gcloud iam roles describe $role_name

# List bigquery roles containing 'Viewer' in the name
gcloud iam list-grantable-roles //cloudresourcemanager.googleapis.com/projects/$GCP_PROJECT_ID --filter="name ~ bigquery.*Viewer"

# List roles that can be applied for specific resources
gcloud iam list-grantable-roles //cloudresourcemanager.googleapis.com/projects/$GCP_PROJECT_ID

# List permissions that can be applied for specific resources
gcloud iam list-testable-permissions //cloudresourcemanager.googleapis.com/projects/$GCP_PROJECT_ID
  • Creating custom roles
# Roles yaml file structure
# LAUNCH_STAGE could be ALPHA, BETA or GA
title: [ROLE_TITLE]
description: [ROLE_DESCRIPTION]
stage: [LAUNCH_STAGE]
includedPermissions:
- [PERMISSION_1]
- [PERMISSION_2]
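As a concrete (hypothetical) example, a role-definition.yaml granting read-only compute instance access could look like:

```yaml
title: Compute Viewer Lite
description: Read-only access to compute instance details
stage: GA
includedPermissions:
- compute.instances.get
- compute.instances.list
```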

# Create a custom role using a yaml file
gcloud iam roles create $role_name --project $GCP_PROJECT_ID \
  --file role-definition.yaml

# Create a custom role using gcloud only
gcloud iam roles create $role_name --project $GCP_PROJECT_ID \
    --title "Role title" --description "Role description" \
    --permissions compute.instances.get,compute.instances.list --stage ALPHA
  • Updating custom roles
# Update a custom role
# Create a YAML file from the existing role
gcloud iam roles describe $role_name --project $GCP_PROJECT_ID > role.yaml 

# Edit the role.yaml file fields as needed (don't modify the etag)

# The etag ensures that the role we are updating has not already been
# updated by someone else

# For the role to be updated, the etag of the current role version 
# should match the one inside the Yaml file

# Each time a role is updated, the etag changes

# Update the role using the role.yaml file
gcloud iam roles update $role_name --project $GCP_PROJECT_ID \
  --file role.yaml
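The etag check described above is plain optimistic concurrency; a toy shell illustration of the comparison (no gcloud involved, etag values are made up):

```shell
# etag currently stored server-side for the role (hypothetical value)
server_etag="BwWxyzAbc12"
# etag carried inside our role.yaml, captured by the earlier describe
file_etag="BwWxyzAbc12"

# The update is only accepted when both etags match, i.e. nobody
# updated the role between our describe and our update
if [ "$file_etag" = "$server_etag" ]; then
  echo "update accepted"
else
  echo "conflict: role changed since describe"
fi
```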

# Update a custom role using gcloud only

# Add permissions
gcloud iam roles update $role_name --project $GCP_PROJECT_ID \
  --add-permissions storage.buckets.get,storage.buckets.list

# Remove permissions
gcloud iam roles update $role_name --project $GCP_PROJECT_ID \
  --remove-permissions storage.buckets.list
  • Making custom roles unusable
# Disable a custom role
gcloud iam roles update $role_name --project $GCP_PROJECT_ID --stage DISABLED

# Delete a custom role
gcloud iam roles delete $role_name --project $GCP_PROJECT_ID

# After that, role bindings are still present but inactive
# The role can still be restored within 7 days
# After those 7 days, a 30-day permanent deletion process starts
# The role ID becomes available again after 37 days

# Restore custom roles (only within 7 days after the deletion)
gcloud iam roles undelete $role_name --project $GCP_PROJECT_ID

# Deleted roles will have 'deleted: true' and 'stage: DISABLED'
# in their description

Managing service accounts

# Create a service account
gcloud iam service-accounts create $service_account_name

Managing network

VPC and subnets

# Create a VPC
# $subnet_mode can be:
# 'auto' for automatic subnets creation in all regions or 
# 'custom' for manual subnets creation
gcloud compute networks create $network_name --subnet-mode=$subnet_mode

# Create a subnet
gcloud compute networks subnets create $subnet_name \
  --network=$network_name \
  --range=$subnet_range \
  --region=$subnet_region

# List, update, delete networks or subnets
gcloud compute networks [list|update|delete|--help] ...
gcloud compute networks subnets [list|update|delete|--help] ...

Firewall rules

# Create a firewall rule
gcloud compute firewall-rules create $firewall_rule_name \
  --project=$gcp_project_id \
  --network=$network_name \
  --description="Description of the rule" \
  --direction=INGRESS \ # or EGRESS (for outbound traffic)
  --priority=1000 \ # rule priority, lower values are evaluated first
  --source-ranges=x.x.x.x/xx \ # multiple comma-separated ranges can be specified. Use 0.0.0.0/0 to match all address ranges
  --action=ALLOW \ # or DENY
  --rules=tcp:22,tcp:3389,icmp \ # comma-separated list of protocols or protocol:port
  [--source-service-accounts] \ # identify the flow source by service accounts. Flows from computes using those service accounts will match
  [--source-tags] \ # identify the flow source by network tags. Flows from computes using those tags will match
  [--target-service-accounts] \ # identify the flow target by service accounts. Flows to computes using those service accounts will match
  [--target-tags] # identify the flow target by network tags. Flows to computes using those tags will match

# List, update, delete firewall rules
gcloud compute firewall-rules [list|update|delete|--help] ...

# List default VPC firewall rules
gcloud compute firewall-rules list --filter="network='default'"
NAME: default-allow-icmp
NETWORK: default
DIRECTION: INGRESS
PRIORITY: 65534
ALLOW: icmp
DENY: 
DISABLED: False

NAME: default-allow-internal
NETWORK: default
DIRECTION: INGRESS
PRIORITY: 65534
ALLOW: tcp:0-65535,udp:0-65535,icmp
DENY: 
DISABLED: False

NAME: default-allow-rdp
NETWORK: default
DIRECTION: INGRESS
PRIORITY: 65534
ALLOW: tcp:3389
DENY: 
DISABLED: False

NAME: default-allow-ssh
NETWORK: default
DIRECTION: INGRESS
PRIORITY: 65534
ALLOW: tcp:22
DENY: 
DISABLED: False

VPC Peering between network-a (in project-a) and network-b (in project-b)

  • Create a peer-a-to-b in VPC network peering, in project-a:
    • select the network-a as source
    • indicate the network-b, in project-b as destination
    • create the peer-a-to-b, the peering status will be in waiting state
  • Create a peer-b-to-a in VPC network peering, in project-b:
    • select the network-b as source
    • indicate the network-a, in project-a as destination
    • create the peer-b-to-a, the peering status will be in active state on both sides
  • Peering is properly configured
  • To see new routes to peered VPCs:
    • gcloud compute routes list --project $PROJECT_ID
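The console steps above can also be scripted: `gcloud compute networks peerings create` is the CLI equivalent. The sketch below only prints the two commands (echo keeps it side-effect free so it can be inspected without a project); drop the echo to actually run them, each side in its own project:

```shell
# Print the gcloud command that creates one side of the peering
# (remove the echo to execute for real)
peer_cmd() {
  echo gcloud compute networks peerings create "$1" \
    --network="$2" --peer-project="$3" --peer-network="$4"
}

peer_cmd peer-a-to-b network-a project-b network-b   # run in project-a
peer_cmd peer-b-to-a network-b project-a network-a   # run in project-b
```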

Managing virtual machines

  • By default:
    • compute instances get a public IP
    • ingress network traffic is denied
  • For info about machine types, have a look at gcp-machines-resources
  • For info about available regions and zones, have a look at gcp-datacenters-locations
# Make sure a compute instance is ready:
gcloud compute instances get-serial-port-output $instance_name

# Set default compute region and zone to use
gcloud config set compute/region $region
gcloud config set compute/zone $zone

# Show currently set region and zone values
gcloud config get-value compute/region
gcloud config get-value compute/zone

# List public cloud compute instances images available for use
gcloud compute images list --filter="name ~ keyword" --limit 10

# Get info about all command line options 
# for creating virtual machines
gcloud compute instances create --help

# For quick creation commands examples, 
# search for 'EXAMPLES' in the help doc

# Create a virtual machine
$ gcloud compute instances create $instance_name \
--machine-type $instance_type \
--zone $zone \
--tags=$network_tags \
--image-family=debian-11 \
--image-project=debian-cloud \
--metadata=startup-script='#!/bin/bash
      apt-get update
      apt-get install mypackage -y
      another command here'

# SSH into a virtual machine
$ gcloud compute ssh $instance_name
  • For Windows machines:

    • Change the boot disk to use a Windows machine image
    • Set RDP (Remote Desktop Protocol) credentials for 'admin' user:
    gcloud compute reset-windows-password $instance_name --zone $zone --user admin
    
    • Connect to the Windows VM using RDP:
      • Use your favorite RDP client depending on your OS
      • If you are using Windows, an RDP client is already installed:
        • Start Menu => search for RDP
      • If you are using OSX, the CoRD package could be used
  • For managing VMs that do not have public IPs from outside the corporate internal network using SSH and RDP, Cloud IAP (Identity Aware Proxy) can be used:

# Enable Cloud IAP API
gcloud services enable iap.googleapis.com

# Create VMs
gcloud compute instances create linux-iap --no-address
gcloud compute instances create windows-iap --image-project=windows-cloud --image=windows-server-2016-dc-v20240516 --no-address
gcloud compute instances create windows-connectivity --image-project=$project --image=$custom_image_name

# Create ingress firewall rules for IAP on port 22 (SSH) and 3389 (RDP)
# When --target-tags or --target-service-accounts are not specified, the rule 
# applies to all instances using subnets of the network (VPC)
gcloud compute firewall-rules create allow-ingress-from-iap \
  --project=$project --network=$network_name \
  --description="Allows SSH and RDP for IAP" \
  --direction=INGRESS \
  --priority=1000 \
  --source-ranges=35.235.240.0/20 \
  --action=ALLOW \
  --rules=tcp:22,tcp:3389

# In the Google Cloud Console Identity-Aware Proxy configuration (search for IAP), switch to the SSH and TCP Resources tab. 
# Select the target instances and 'Add principal' for the VM service account or user account email that should connect to 
# the VMs using IAP, and grant them the Cloud IAP > IAP-Secured Tunnel User (iap.tunnelResourceAccessor)

# Start local tunnel to windows VM RDP port or linux SSH port
# The local tunneled port will be printed and can then be used
# to connect to the remote machine using RDP
gcloud compute start-iap-tunnel windows-iap 3389 --local-host-port=localhost:0 --zone=europe-west4-b
gcloud compute start-iap-tunnel linux-iap 22 --local-host-port=localhost:0 --zone=europe-west4-b

# The IAP Desktop software can also be used to connect to machines
https://github.com/GoogleCloudPlatform/iap-desktop

# or use this to directly ssh into 
# the linux machines through the iap tunnel
gcloud compute ssh --tunnel-through-iap $machine_name

Managing Instance Groups + HTTP/HTTPS Load balancer

For a deep-dive on GCP Load Balancers, have a look at: Understanding GCP Load Balancers

  • instance-groups: can be used to manage a set of Compute Engine resources. Two types: managed and unmanaged. Expose a named port that can then be used by load balancer backend services to target the instance group. GCP external/internal L7 load balancers can load balance traffic to instance group resources.
  • health-checks: used by GCP external/internal L7 load balancers to verify the status of backend service resources before sending them traffic. Different types of verifications like tcp, http, http2, https, ssl and grpc can be performed, also on a custom port number.
  • backend-service: used by GCP external/internal L7 load balancers for managing load balancing backends. Those backends can be instance groups, network endpoint groups... In this section, we will be using an instance group as the backend of a backend service. We specify the incoming request protocol to use (may vary depending on the type of load balancer), the port name for the exported named port of the instance group backends, the name of the health checks to use and the type of the backend service as global.
  • url-maps: used to tell GCP external/internal L7 load balancers where (on which backend service / Cloud Storage bucket) to redirect a specific HTTP request. Ex: redirect https://example.com/video to one backend service and https://example.com/audio to another backend service or a Cloud Storage service.
  • target-http(s)-proxies: used by GCP external/internal L7 load balancers. Terminate HTTP or HTTPS connections arriving at the load balancer and create new ones toward the backends, originating from the load balancer. The use of the HTTPS (instead of HTTP) target proxy requires an associated SSL/TLS certificate for encrypting communications between the clients and the load balancer.
  • forwarding-rules: the frontend of GCP load balancers. Define the IP address of the load balancer, a protocol + one or more ports to listen on, and depending on the load balancer type, specify either a backend service, a target proxy or a target pool.
# Create instance template
gcloud compute instance-templates create $template_name \
   --region=us-east4 \
   --network=default \
   --subnet=default \
   --tags=allow-health-check \
   --machine-type=e2-medium \
   --image-family=debian-11 \
   --image-project=debian-cloud \
   [--metadata=startup-script-url=gs://BUCKET_NAME/FILE] \
   --metadata=startup-script='#!/bin/bash
     apt-get update
     apt-get install apache2 -y
     a2ensite default-ssl
     a2enmod ssl
     systemctl restart apache2'

# Create a Managed Instance Group
gcloud compute instance-groups managed create $instance_group_name --template=$template_name --size=2

# Another instance group creation command + autoscaling + named port
# --zones for multizones instance groups
# --zone for single zone instance groups

gcloud beta compute instance-groups managed create instance-group-1 \
  --project=$project --base-instance-name=instance-group-1 \
  --template=instance-template-1 --size=1 --zones=${region}-c \
  --target-distribution-shape=EVEN --instance-redistribution-type=PROACTIVE \
  --default-action-on-vm-failure=repair --no-force-update-on-repair \
  --standby-policy-mode=manual --list-managed-instances-results=PAGELESS

gcloud beta compute instance-groups managed set-autoscaling instance-group-1 \
  --project=$project --region=$region --mode=on \
  --min-num-replicas=1 --max-num-replicas=5 \
  --target-cpu-utilization=0.8 --cool-down-period=45

gcloud compute instance-groups set-named-ports instance-group-1 \
  --project=$project --region=$region --named-ports=http:80

# Create firewall rule for health checks
gcloud compute firewall-rules create fw-allow-health-check \
  --network=default \
  --action=allow \
  --direction=ingress \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=allow-health-check \
  --rules=tcp:80

# Load balancer IP address
gcloud compute addresses create lb-ipv4-1 --ip-version=IPV4 --global
gcloud compute addresses describe lb-ipv4-1 --format="get(address)" --global

# Create health check
gcloud compute health-checks create http http-basic-check --port 80

# Create backend service
gcloud compute backend-services create web-backend-service --protocol=HTTP --port-name=http --health-checks=http-basic-check --global

# Add instance group as backend of the backend service
gcloud compute backend-services add-backend web-backend-service \
  --instance-group=$instance_group_name \
  --instance-group-zone=us-east4-c \
  --global

# Create URL map
gcloud compute url-maps create web-map-http --default-service web-backend-service

# Create target http proxy
gcloud compute target-http-proxies create http-lb-proxy --url-map web-map-http

# Create forwarding rule
gcloud compute forwarding-rules create http-content-rule \
   --address=lb-ipv4-1 \
   --global \
   --target-http-proxy=http-lb-proxy \
   --ports=80

Managing TCP load balancers

# Web server computes
# www1 and www2
$ gcloud compute instances create www1 \
    --zone=Zone  \
    --tags=network-lb-tag \
    --machine-type=e2-small \
    --image-family=debian-11 \
    --image-project=debian-cloud \
    --metadata=startup-script='#!/bin/bash
      apt-get update
      apt-get install apache2 -y
      service apache2 restart
      echo "
<h3>Web Server: www1</h3>" | tee /var/www/html/index.html'

# Computes firewall rule to allow TCP traffic on port 80
$ gcloud compute firewall-rules create www-firewall-network-lb \
    --target-tags network-lb-tag --allow tcp:80

# LB external IP
$ gcloud compute addresses create network-lb-ip-1 --region Region

# Example for reserved private address from a specific subnet
$ gcloud compute addresses create my-ilb-ip --subnet=SUBNET --region Region --addresses=ADDRESS

# Health check resource
$ gcloud compute http-health-checks create basic-check

# Example for TCP health check
$ gcloud compute health-checks create tcp my-ilb-health-check --port=80 --port-name=http --region=$region

# Target pool (to have virtual machines as backend)
$ gcloud compute target-pools create www-pool \
  --region Region --http-health-check basic-check

# Add computes to target pool
$ gcloud compute target-pools add-instances www-pool \
    --instances www1,www2

# Instead of target pools, we can use backend services 
# to add instance groups as backends instead of virtual machines

# Create backend service
$ gcloud compute backend-services create my-backend-service \
  --region=$region --health-checks=my-ilb-health-check --health-checks-region=$region --load-balancing-scheme=INTERNAL

# Add instance groups to backend service
$ gcloud compute backend-services add-backend my-backend-service \
    --instance-group=my-instance-group \
    --instance-group-region=$region

# Forwarding rule (to target pool: VMs backend)
$ gcloud compute forwarding-rules create www-rule \
    --region Region \
    --ports 80 \
    --address network-lb-ip-1 \
    --target-pool www-pool

# Forwarding rule (to backend services: instance groups as backend)
$ gcloud compute forwarding-rules create www-rule \
    --ports=80 \
    --address=my-ilb-ip \
    --backend-service=my-backend-service \
    --region=$region \
    --load-balancing-scheme=INTERNAL \
    --network=my-internal-app \
    --subnet=subnet-b

# Get LB IP to reach VMs
$ IPADDRESS=$(gcloud compute forwarding-rules describe www-rule --region Region --format="json" | jq -r .IPAddress)
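The jq extraction above can be tried without a project by stubbing the describe output with a hypothetical sample; the sed variant below needs no extra tooling:

```shell
# Hypothetical sample of `gcloud compute forwarding-rules describe --format=json` output
json='{"IPAddress": "203.0.113.10", "IPProtocol": "TCP", "portRange": "80-80"}'

# Same idea as the jq pipeline above: extract the IPAddress field
IPADDRESS=$(printf '%s' "$json" | sed -n 's/.*"IPAddress": "\([^"]*\)".*/\1/p')
echo "$IPADDRESS"
```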

# Notes:

# the name of the load balancer in GCP is the name of the target pool or backend service
# the forwarding rule IP address makes the LB private or public
# by default (without the options that set the backend service and forwarding rule load-balancing scheme to INTERNAL), the LB will be public

Managing Cloud Storage (GCS) buckets

# Create a GCS bucket
gsutil mb -p [PROJECT_ID] gs://$bucket_name

# List bucket content
gsutil ls [-l] gs://$bucket_name

# Copy a file to a GCS bucket
gsutil cp myfile gs://$bucket_name

# Download files from a GCS bucket
gsutil cp -r gs://$bucket_name/myfile $destination_path

# Make GCS buckets objects publicly accessible
gsutil acl ch -u AllUsers:R gs://$bucket_name/myfile

# Remove public access on a GCS buckets objects
gsutil acl ch -d AllUsers gs://$bucket_name/myfile
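Once an object has been made public with the ACL command above, it is reachable anonymously over HTTPS at the canonical storage.googleapis.com URL; a quick sketch building that URL (bucket and object names are hypothetical):

```shell
bucket_name="my-bucket"   # hypothetical bucket name
object="myfile"           # hypothetical object name

# Public objects are served anonymously from storage.googleapis.com
url="https://storage.googleapis.com/${bucket_name}/${object}"
echo "$url"
```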

# Remove files from a GCS bucket
gsutil rm gs://$bucket_name/myfile

Managing containers

GKE standard cluster (VPC native + private nodes)

  • A /28 CIDR address range must be specified during private GKE cluster creation for the control plane / master nodes
  • IP aliasing should also be enabled so that node VMs can use multiple IP addresses for different services on the same machine
  • After cluster creation, a default node pool is automatically created with 3 nodes
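Why a /28? GKE requires exactly that size for the control-plane range; the shell arithmetic below shows how many addresses it holds:

```shell
# A /28 leaves 32 - 28 = 4 host bits, i.e. 2^4 addresses
prefix=28
addresses=$(( 1 << (32 - prefix) ))
echo "$addresses"   # 16 addresses in a range like 172.16.0.16/28
```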
# Create cluster (automatic network + subnets creation)
gcloud beta container clusters create private-cluster \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.16/28 \
    --enable-ip-alias \
    --create-subnetwork ""

# See created network + subnets used by the GKE cluster
# Look for gke-private-cluster-subnet-xxxxxxxx
gcloud compute networks subnets list --network default

# See details about the subnet
# You should see a primary IP address range (used for nodes and loadbalancers)
# and two secondary IP addresses ranges (one for pods, one for services)
gcloud compute networks subnets describe [SUBNET_NAME] --region=$REGION

# The GKE cluster API server will have a public endpoint
# that is by default not widely accessible
# To allow IP addresses to communicate with the endpoint:
gcloud container clusters update private-cluster \
    --enable-master-authorized-networks \
    --master-authorized-networks [MY_EXTERNAL_RANGE]

# Create a network for a GKE cluster
gcloud compute networks subnets create my-subnet \
    --network default \
    --range 10.0.4.0/22 \
    --enable-private-ip-google-access \
    --region=$REGION \
    --secondary-range my-svc-range=10.0.32.0/20,my-pod-range=10.4.0.0/14 # Optional; could be created automatically when creating a cluster using this subnet

# Create a GKE cluster using the preceding network
gcloud beta container clusters create private-cluster2 \
    --enable-private-nodes \
    --enable-ip-alias \
    --master-ipv4-cidr 172.16.0.32/28 \
    --subnetwork my-subnet \
    --network=default \
    --services-secondary-range-name my-svc-range \ # Optional, will be created automatically if not set
    --cluster-secondary-range-name my-pod-range \ # Optional, will be created automatically if not set
    --zone=$ZONE # for zonal clusters
    [--service-account=sa-email] # service account that will be attached to the cluster nodes
    [--enable-private-endpoint] # Make sure only private endpoint is available for cluster API server
    [--num-nodes] # Number of nodes
    [--scopes] # Oauth scopes

# Connection to cluster with kubectl
sudo apt-get install kubectl
sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin

# Create kubectl config/credentials to talk to the cluster using kubectl
# Will use the cluster public endpoint by default
# use --internal-ip to use the private API endpoint in the config
gcloud container clusters get-credentials clustername --zone=$ZONE

Managing Cloud functions

Writing Google Cloud Functions tutorial | Google Pub/Sub architecture

There are two types of Cloud Functions:

  • HTTP functions that are triggered with an HTTP request
  • Event based functions that are triggered from events coming from other GCP services like Cloud Storage buckets, Pub/Sub...

Enable Cloud Functions APIs

# Enable Cloud function APIs
gcloud services enable \
  artifactregistry.googleapis.com \
  cloudfunctions.googleapis.com \
  cloudbuild.googleapis.com \
  eventarc.googleapis.com \
  run.googleapis.com \
  logging.googleapis.com \
  storage.googleapis.com \
  pubsub.googleapis.com

Create a GCS bucket event based gen2 function

  • Cloud Storage functions are based on Pub/Sub notifications from Cloud Storage, and support the finalize, delete, archive, and metadata update event types

  • We need to grant the Pub/Sub publisher role to the Cloud Storage service agent on our project. A service agent is a Google-managed service account that allows the service to access your resources

SERVICE_ACCOUNT=$(gcloud storage service-agent)
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$SERVICE_ACCOUNT --role roles/pubsub.publisher
  • Function creation
# The function index.js

const functions = require('@google-cloud/functions-framework');

// Register a CloudEvent callback with the Functions Framework that will
// be triggered by Cloud Storage events.
functions.cloudEvent('checkTempData', cloudEvent => {
  console.log(`Event ID: ${cloudEvent.id}`);
  console.log(`Event Type: ${cloudEvent.type}`);

  const file = cloudEvent.data;
  console.log(`Bucket: ${file.bucket}`);
  console.log(`File: ${file.name}`);
  console.log(`Created: ${file.timeCreated}`);
});

# The function dependencies file

{
  "name": "temperature-data-checker",
  "version": "0.0.1",
  "main": "index.js",
  "dependencies": {
    "@google-cloud/functions-framework": "^2.1.0"
  }
}

# Cloud function creation

gcloud functions deploy temperature-data-checker \
 --gen2 \
 --runtime nodejs20 \
 --entry-point checkTempData \ # name of the main function inside the index.js file
 --source . \
 --region $REGION \
 --trigger-bucket $BUCKET \
 --trigger-location $BUCKET_REGION \
 --max-instances 1

 # you may get permission errors on the first launches. Just retry

Create a Pub/Sub event based function

# Function named helloWorld
# function code resides inside the functionbucket bucket
# function will be triggered once a message is published 
# inside the hello_world Pub/Sub topic
# function code will be run with nodejs20
gcloud functions deploy helloWorld \
  --stage-bucket functionbucket \
  --trigger-topic hello_world \
  --runtime nodejs20

# if you get an "OperationError" message, just rerun the command

Inspecting functions

# Get details about the deployed function
gcloud functions describe helloWorld

# Show function logs
gcloud functions logs read helloWorld

gcloud functions logs read gen_function \
 --region $REGION --gen2 --limit=100 --format "value(log)"

Managing logs

Have a look at GCP logging overview for more.

# See buckets used for storing logs of a project
gcloud logging buckets list

# List logs containers for a project
gcloud logging logs list

# Reading logs from any logs containers
gcloud logging read "LOG_FILTER"

# Examples:

# Show activity logs for a specific pod (creation, deletion...)
gcloud logging read 'protoPayload.resourceName:mypod AND protoPayload.resourceNamespace:mynamespace AND logName:projects/mygcpproject/logs/cloudaudit.googleapis.com%2Factivity AND timestamp>="2024-05-14T06:05:54.238629139Z" AND timestamp<="2024-05-14T13:16:54.238629139Z"' --limit 1
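Timestamp bounds like the ones in the filter above can be generated rather than hand-written; a small sketch building a last-hour filter (GNU date assumed; `resource.type=gce_instance` is a standard Cloud Logging field, the rest of the filter is up to you):

```shell
# Compute an RFC 3339 timestamp one hour in the past
# (GNU date syntax; on BSD/macOS use: date -u -v-1H '+%Y-%m-%dT%H:%M:%SZ')
since=$(date -u -d '1 hour ago' '+%Y-%m-%dT%H:%M:%SZ')

# Assemble the filter string to pass to gcloud logging read
filter="resource.type=gce_instance AND timestamp>=\"$since\""
echo "$filter"
# then: gcloud logging read "$filter" --limit 10
```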