Service accounts and ACLs
Gateway service accounts and authentication
Gateway service accounts are tightly coupled to the authentication method you choose to connect your clients to Gateway.
Whatever method you use, the objective of authentication is to provide Conduktor Gateway with a service account name to use in Kafka and, optionally, a set of groups and a virtual cluster to associate with that service account.
There are 3 ways to authenticate users with the Gateway:
- Delegating authentication to the backing cluster (for instance, Confluent Cloud API keys)
- Using an external source of authentication such as OAuth/OIDC, mTLS or LDAP
- Using Gateway Local Service Accounts
Service Accounts, authentication and authorization
Find out how Conduktor Gateway manages Service Accounts, client authentication and ACLs (Access Control Lists):
- where to authenticate clients: Gateway or Kafka
- which authentication method to use
- when to use local or external Service Accounts
- how and where to manage Service Account ACLs
A Service Account is a non-human identity, used by Kafka clients to authenticate and perform actions on resources through Gateway, depending on their ACLs.
Local and external service accounts
In Gateway, you can define two types of service accounts:
- Local: an identity created and managed using the Gateway admin API, with no external dependency. Gateway can generate a service account password with a configurable time-to-live, and these credentials can then be shared with clients for authentication. As such, local service accounts are only supported in non-delegated mode. This type of service account is useful when you don't want to depend on an external identity provider, for example when sharing data with an external partner.
- External: an identity managed externally, such as by an OIDC identity provider or an mTLS certificate. Optionally, this identity can be given a more user-friendly name in Gateway. For instance, if Azure is used as the OIDC provider, the application principal recognized by Gateway defaults to a UUID generated by Azure. To improve readability, this principal can be mapped to a Gateway external service account, letting you use a friendly name when declaring ACLs and Interceptors, and in audit logs within Gateway.
Each service account is stored in an internal topic, `_conduktor_${GATEWAY_CLUSTER_ID}_usermappings`, and includes a name (used when applying ACLs and interceptors) and an associated virtual cluster (by default, `passthrough`).
Find out what client authentication methods are supported.
Local service accounts
Local service accounts are useful if you want to manage client credentials directly within Conduktor Gateway. You can easily create, update and delete them directly from Gateway's Admin API.
Learn how to manage a local service account.
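To see which service accounts already exist, you can also list them through the same Admin API. This is a minimal sketch, assuming the local deployment and default admin credentials used in the how-to guide below, and that the v2 API exposes a GET on the same service-account path used for creation:
# List the GatewayServiceAccount resources known to Gateway (assumes default admin credentials).
curl \
  --request GET \
  --url 'http://localhost:8888/gateway/v2/service-account' \
  --header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y'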
External service accounts
For external service accounts, the client credentials are created and handled by a third-party identity provider (OIDC, mTLS). However, you might want to:
- refer to them using a friendly name in Gateway
- associate them to a virtual cluster
In these scenarios, you should create an external service account in Gateway, and link it to the principal given by your identity provider.
This external service account will be the one used in Gateway to apply ACLs and interceptors, and will be logged in the Gateway Audit Log internal topic.
Learn how to manage an external service account here.
Client authentication methods
Gateway supports different client authentication methods, depending on whether you're using local or external service accounts.
| GATEWAY_SECURITY_PROTOCOL | Local SA | External SA |
|---|---|---|
| **Anonymous** | | |
| PLAINTEXT | ❌ | ❌ |
| SSL | ❌ | ❌ |
| **SSL with client auth (mTLS)** | | |
| SSL | ❌ | ✅ |
| **SASL** | | |
| SASL_PLAINTEXT | ✅ | Only if OAUTHBEARER |
| SASL_SSL | ✅ | Only if OAUTHBEARER |
| **Delegated SASL** | | |
| DELEGATED_SASL_PLAINTEXT | ❌ | ✅ |
| DELEGATED_SASL_SSL | ❌ | ✅ |
Anonymous authentication
PLAINTEXT
In the case of PLAINTEXT authentication, the client is anonymous and doesn't need any credentials. This means that authentication can't take place and so local and external service accounts are not supported in this mode.
How to configure the client > Gateway connection with PLAINTEXT.
SSL (encryption only)
If you use SSL for encryption only, Gateway presents a keystore certificate trusted by the client’s truststore, without authenticating the client. Therefore, local and external service accounts are not supported in this mode.
How to configure the client > Gateway connection with SSL.
SSL with client authentication (mTLS)
With mutual TLS (mTLS) authentication, both Kafka clients and Gateway validate each other’s identities using TLS certificates, ensuring secure and trusted bidirectional communication. This means that both the client and Gateway authenticate one another through their respective certificates.
As a result, Gateway extracts the user identity from the TLS certificate, which can be mapped to an external Service Account in Gateway.
The username will be in this format: `CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown`. By default, Gateway extracts the certificate's distinguished name. You can change this by setting the `GATEWAY_SSL_PRINCIPAL_MAPPING_RULES` environment variable to a custom rule. For instance, `GATEWAY_SSL_PRINCIPAL_MAPPING_RULES=RULE:^CN=([a-zA-Z0-9.-]*).*$$/$$1/ , DEFAULT` will extract the CN part of the certificate.
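As an illustration, here is the same rule written for a plain shell, assuming the variable follows Kafka's standard principal mapping syntax (in docker-compose, each $ must be doubled as $$, which is why the example above shows $$):
# With this rule, a client certificate whose distinguished name is
#   CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown
# is mapped to the Gateway principal "writeuser"; DEFAULT keeps the full DN as a fallback.
export GATEWAY_SSL_PRINCIPAL_MAPPING_RULES='RULE:^CN=([a-zA-Z0-9.-]*).*$/$1/ , DEFAULT'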
How to configure the client > Gateway connection with SSL mTLS.
SASL authentication
With SASL authentication using OAUTHBEARER, clients authenticate with an identity (the `sub` claim of the OIDC JWT token), which can be mapped to an external Service Account in Gateway.
If you have configured OAUTHBEARER, Gateway expects the client to provide a JWT token obtained with the client credentials grant type.
See how to manage Gateway Service Accounts using SASL_PLAINTEXT.
Both SASL_PLAINTEXT and SASL_SSL work the same way; the only difference is that SASL_SSL encrypts the communication, while SASL_PLAINTEXT transmits it in plain text.
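As an example, here is a minimal sketch of the client-side difference for SASL_SSL with a local service account; the truststore path, password and credentials are placeholders:
# Hypothetical client properties for SASL_SSL: compared to SASL_PLAINTEXT, only
# security.protocol and the truststore settings change.
cat > sasl-ssl-client.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="my-service-account" password="my-gateway-token";
EOF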
How to configure the client > Gateway connection with SASL.
With Conduktor Gateway, you can choose where client authentication and authorization take place. You can either:
- use delegated SASL authentication: Gateway forwards the client credentials to the Kafka cluster, which authenticates the clients and holds their ACLs, or
- handle this in Gateway using the supported client authentication methods: Gateway authenticates the clients and manages their ACLs.
Delegated SASL authentication
In some cases, you might want to delegate the authentication to the backing Kafka cluster. For example, if you want to gradually transition to using the Conduktor Gateway, but first want to continue using the ACLs and Service Accounts defined in your Kafka cluster.
In this case, Gateway can forward the client credentials to the backing Kafka cluster, and the Kafka cluster will authenticate the client for Gateway.
Currently, delegated SASL authentication only supports PLAIN, SCRAM, OAUTHBEARER and AWS_MSK_IAM mechanisms. Get in touch for specific requirements.
In this mode, client authorization (ACLs) is also handled by the backing Kafka cluster: any calls to the Kafka ACLs API made on Gateway are forwarded to it.
As a result, local service accounts are not available on Gateway, but external ones are and can be mapped to the client principal. This way, Gateway applies its interceptors to an external service account with a user-friendly name.
Virtual clusters, alias topics and concentrated topics are not available in the Delegated SASL mode.
How to configure the client > Gateway connection with DELEGATED_SASL.
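As a rough sketch, only the client-facing security protocol changes on the Gateway side in delegated mode; the example below assumes the backing cluster uses SASL/PLAIN (keep the usual KAFKA_* connection settings from the non-delegated setup):
# Minimal, illustrative delegated SASL settings.
# Clients > Gateway: client credentials are forwarded to Kafka for authentication.
export GATEWAY_SECURITY_PROTOCOL=DELEGATED_SASL_PLAINTEXT
# Gateway > Kafka: the backing cluster performs the actual SASL authentication.
export KAFKA_SECURITY_PROTOCOL=SASL_PLAINTEXT
export KAFKA_SASL_MECHANISM=PLAIN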
Authorization management (ACLs)
In delegated mode
Authorization (ACLs) is handled by the backing Kafka cluster. Any calls to the Kafka ACLs API made on Gateway will be forwarded to the backing Kafka cluster.
In non-delegated mode
Authorization (ACLs) is managed by Gateway, so you have to define the ACLs for your applications in Gateway. This is done using the Kafka ACLs commands or the Conduktor Console UI.
The principal attached to the ACLs can either be the local or the external service account name.
If the client connects to Gateway using OAUTHBEARER but no external Service Account is defined, the `sub` claim of the client's JWT token is used as the principal.
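For instance, without an external service account mapping, an ACL would have to target the raw `sub` claim directly. This is a hypothetical example: the UUID, the properties file and the topic prefix are placeholders:
# Grant write on a topic prefix directly to the OIDC "sub" claim (placeholder UUID).
kafka-acls --bootstrap-server localhost:6969 \
  --command-config acl-admin.properties \
  --add \
  --allow-principal "User:0a1b2c3d-0000-0000-0000-placeholder" \
  --operation write \
  --topic finance- \
  --resource-pattern-type prefixed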
Manage service accounts using Gateway
In this how-to guide, you will learn how to manage service accounts on the Gateway. We will cover the creation of both local and external service accounts, and how to assign ACLs to them.
In this scenario:
- The SASL_PLAINTEXT security protocol is used for the communication between Clients > Gateway > Kafka
- ACLs are enabled on the passthrough virtual cluster (`GATEWAY_ACL_ENABLED: true`)
- The ACLs super-user is named local-acl-admin (`GATEWAY_SUPER_USERS: local-acl-admin`)
- The Gateway API admin credentials are the defaults
We will use the Gateway API to create and manage service accounts, but the following guide works with the CLI as well.
For local deployments, the Gateway API documentation is available at http://localhost:8888. In this guide, we will use the `service-account` and `token` endpoints.
In the `service-account` section of the Gateway API documentation, you'll notice that to create a service account on the Gateway, you have to choose between a `local` and an `external` service account.
A `local` service account is managed by the Gateway itself, while an `external` service account is managed by an external OIDC identity provider.
For more information refer to the Service Accounts, Authentication & Authorization concept page if you have not done so already.
Prerequisites
Start the local deployment
You can find below the docker-compose file to start a local Gateway with the configuration described above.
- Start the environment
- docker-compose.yaml
docker compose -f docker-compose.yaml up -d
services:
conduktor-gateway:
image: conduktor/conduktor-gateway:3.3.0
hostname: conduktor-gateway
container_name: conduktor-gateway
ports:
- 8888:8888
- 6969-6971:6969-6971
environment:
# Gateway > Kafka connection
KAFKA_BOOTSTRAP_SERVERS: kafka-1:9092,kafka-2:9092,kafka-3:9092
KAFKA_SASL_MECHANISM: PLAIN
KAFKA_SECURITY_PROTOCOL: SASL_PLAINTEXT
KAFKA_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
# Clients > Gateway connection
GATEWAY_SECURITY_PROTOCOL: SASL_PLAINTEXT
GATEWAY_ADVERTISED_HOST: localhost # Considering your clients are running on your machine, outside of the Docker network
# GATEWAY_OAUTH_JWKS_URL: "TO_FILL"
# GATEWAY_OAUTH_EXPECTED_ISSUER: "TO_FILL"
# GATEWAY_OAUTH_EXPECTED_AUDIENCES: "TO_FILL"
# Gateway configuration
GATEWAY_MIN_BROKERID: 1
# Enable ACLs on the passthrough virtual cluster, with the super user
GATEWAY_ACL_ENABLED: true
GATEWAY_SUPER_USERS: local-acl-admin
healthcheck:
test: curl localhost:8888/health || exit 1
start_period: 10s
interval: 5s
retries: 25
depends_on:
kafka-1: { condition: service_healthy }
kafka-2: { condition: service_healthy }
kafka-3: { condition: service_healthy }
zookeeper:
image: confluentinc/cp-zookeeper
container_name: zookeeper
hostname: zookeeper
ports:
- 12181:2181
environment:
ZOOKEEPER_CLIENT_PORT: 2181
healthcheck:
test: echo srvr | nc zookeeper 2181 || exit 1
retries: 20
interval: 10s
kafka-1:
image: confluentinc/cp-kafka:7.7.0
container_name: kafka-1
hostname: kafka-1
ports:
- 19092:19092
environment:
KAFKA_LISTENERS: "INTERNAL://kafka-1:9092,EXTERNAL://:19092"
KAFKA_ADVERTISED_LISTENERS: "INTERNAL://kafka-1:9092,EXTERNAL://localhost:19092"
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "INTERNAL:SASL_PLAINTEXT,EXTERNAL:PLAINTEXT"
KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
KAFKA_BROKER_ID: 1
KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
KAFKA_LISTENER_NAME_INTERNAL_SASL_ENABLED_MECHANISMS: PLAIN
KAFKA_LISTENER_NAME_INTERNAL_PLAIN_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret" user_admin="admin-secret" ;
healthcheck:
test: nc -zv kafka-1 9092 || exit 1
interval: 10s
retries: 25
start_period: 20s
depends_on:
zookeeper: { condition: service_healthy }
kafka-2:
image: confluentinc/cp-kafka:7.7.0
container_name: kafka-2
hostname: kafka-2
ports:
- 19093:19093
environment:
KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
KAFKA_LISTENERS: "INTERNAL://kafka-2:9092,EXTERNAL://:19093"
KAFKA_ADVERTISED_LISTENERS: "INTERNAL://kafka-2:9092,EXTERNAL://localhost:19093"
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "INTERNAL:SASL_PLAINTEXT,EXTERNAL:PLAINTEXT"
KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
KAFKA_BROKER_ID: 2
KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
KAFKA_LISTENER_NAME_INTERNAL_SASL_ENABLED_MECHANISMS: PLAIN
KAFKA_LISTENER_NAME_INTERNAL_PLAIN_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret" user_admin="admin-secret" ;
healthcheck:
test: nc -zv kafka-2 9092 || exit 1
interval: 10s
retries: 25
start_period: 20s
depends_on:
zookeeper: { condition: service_healthy }
kafka-3:
image: confluentinc/cp-kafka:7.7.0
container_name: kafka-3
hostname: kafka-3
ports:
- 19094:19094
environment:
KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
KAFKA_LISTENERS: "INTERNAL://kafka-3:9092,EXTERNAL://:19094"
KAFKA_ADVERTISED_LISTENERS: "INTERNAL://kafka-3:9092,EXTERNAL://localhost:19094"
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "INTERNAL:SASL_PLAINTEXT,EXTERNAL:PLAINTEXT"
KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
KAFKA_BROKER_ID: 3
KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
KAFKA_LISTENER_NAME_INTERNAL_SASL_ENABLED_MECHANISMS: PLAIN
KAFKA_LISTENER_NAME_INTERNAL_PLAIN_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret" user_admin="admin-secret" ;
healthcheck:
test: nc -zv kafka-3 9092 || exit 1
interval: 10s
retries: 25
start_period: 20s
depends_on:
zookeeper: { condition: service_healthy }
Export Java security manager config (optional)
Depending on your version of Java, you may need to run the command below in your shell session. Newer versions of Java have dropped support for the security manager, and current versions of the Kafka CLI commands will fail without this setting. If you get errors when running the later commands with authentication, run this command.
export KAFKA_OPTS="-Djava.security.manager=allow"
Create a few topics on Kafka
Let's create a few topics on Kafka by running the following command from your local machine:
kafka-topics --create --bootstrap-server localhost:19092 --topic finance-data
kafka-topics --create --bootstrap-server localhost:19092 --topic finance-report
Manage a Local Service Account
Create a Local Service Account
A local service account is managed by the Gateway itself. This means we have to ask the Gateway to create it for us, by giving it a name.
The first step is to create a reference to this new service account. In our case, we will call this service account `local-app-finance-dev`, and we want it to exist in the `passthrough` virtual cluster:
curl \
--request PUT \
--url 'http://localhost:8888/gateway/v2/service-account' \
--header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y' \
--header 'Content-Type: application/json' \
--data-raw '{
"kind" : "GatewayServiceAccount",
"apiVersion" : "gateway/v2",
"metadata" : {
"name" : "local-app-finance-dev",
"vCluster" : "passthrough"
},
"spec" : { "type" : "LOCAL" }
}'
Then, we need to get the secret key for this service account, which has a limited lifetime:
curl \
--request POST \
--url 'http://localhost:8888/gateway/v2/token' \
--header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y' \
--header 'Content-Type: application/json' \
--data-raw '{
"vCluster": "passthrough",
"username": "local-app-finance-dev",
"lifeTimeSeconds": 3600000
}'
This will return a JSON object with the `token` field containing the secret key.
{
"token": "eyJhbGciOiJIUzI1NiJ9.eyJ1c2VybmFtZSI6ImxvY2FsLWFwcC1maW5hbmNlLWRldiIsInZjbHVzdGVyIjoicGFzc3Rocm91Z2giLCJleHAiOjE3MzIwOTUzNjN9.-rivmwcI-zvTTqLPeO_0l3xUALz5mKtopp1YMaTswFk"
}
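For convenience, you can capture the token directly in a shell variable; this is a small sketch assuming jq is installed:
# Same request as above, keeping only the token field (requires jq).
TOKEN=$(curl -s \
  --request POST \
  --url 'http://localhost:8888/gateway/v2/token' \
  --header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y' \
  --header 'Content-Type: application/json' \
  --data-raw '{"vCluster": "passthrough", "username": "local-app-finance-dev", "lifeTimeSeconds": 3600000}' \
  | jq -r '.token')
echo "$TOKEN"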
We can now connect to the Gateway `passthrough` virtual cluster using the `local-app-finance-dev` service account and its secret key. Let's do so.
Connect to the Gateway with a Local Service Account
Create a properties file, `local-client.properties`, with the credentials we just generated, to connect to the Gateway:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="local-app-finance-dev" password="eyJhbGciOiJIUzI1NiJ9.eyJ1c2VybmFtZSI6ImxvY2FsLWFwcC1maW5hbmNlLWRldiIsInZjbHVzdGVyIjoicGFzc3Rocm91Z2giLCJleHAiOjE3MzIwOTUzNjN9.-rivmwcI-zvTTqLPeO_0l3xUALz5mKtopp1YMaTswFk";
List topics using the Kafka CLI, authenticating using our service account:
kafka-topics --list --bootstrap-server localhost:6969 --command-config local-client.properties
In this case, the command doesn't return anything because ACLs are enabled on the passthrough virtual cluster (`GATEWAY_ACL_ENABLED: true`). This means that our local service account doesn't have the permissions to see any resources: it's not authorized. Let's modify the ACLs so this service account can list topics.
Create ACLs for a Local Service Account
Create an ACL Admin Local Service Account
To modify the ACLs, we recommend defining a dedicated ACL admin service account.
This is a privileged service account: for the `passthrough` Virtual Cluster, it must be declared in the Gateway configuration using the `GATEWAY_SUPER_USERS` environment variable. In our example, we have named it `local-acl-admin`.
Repeat the previous steps using the name `local-acl-admin`: create the service account, get its credentials and save them to a file.
- Create the service account
- Get its credentials
curl \
--request PUT \
--url 'http://localhost:8888/gateway/v2/service-account' \
--header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y' \
--header 'Content-Type: application/json' \
--data-raw '{
"kind" : "GatewayServiceAccount",
"apiVersion" : "gateway/v2",
"metadata" : {
"name" : "local-acl-admin",
"vCluster" : "passthrough"
},
"spec" : { "type" : "LOCAL" }
}'
curl \
--request POST \
--url 'http://localhost:8888/gateway/v2/token' \
--header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y' \
--header 'Content-Type: application/json' \
--data-raw '{
"vCluster": "passthrough",
"username": "local-acl-admin",
"lifeTimeSeconds": 3600000
}'
Store the generated credentials in a new file, `local-acl-admin.properties`.
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="local-acl-admin" password="eyJhbGciOiJIUzI1NiJ9.eyJ1c2VybmFtZSI6ImxvY2FsLWFjbC1hZG1pbiIsInZjbHVzdGVyIjoicGFzc3Rocm91Z2giLCJleHAiOjE3MzIxNjEwOTB9.m8U_DVv4MTOY9mKiKY2tHeUGjxsUvhC9ssE6iAI3eJc";
As this user is an ACL admin, it has access to all the Gateway topics and can create and modify ACLs for the other service accounts.
Create ACLs for another Local Service Account, using the ACL Admin Service Account
For the `local-app-finance-dev` service account to be able to interact with its topics, we need to give it the WRITE permission on its prefix. Run the following command to do so:
kafka-acls --bootstrap-server localhost:6969 \
--command-config local-acl-admin.properties \
--add \
--allow-principal User:local-app-finance-dev \
--operation write \
--topic finance- \
--resource-pattern-type prefixed
Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=finance-, patternType=PREFIXED)`:
(principal=User:local-app-finance-dev, host=*, operation=WRITE, permissionType=ALLOW)
Current ACLs for resource `ResourcePattern(resourceType=TOPIC, name=finance-, patternType=PREFIXED)`:
(principal=User:local-app-finance-dev, host=*, operation=WRITE, permissionType=ALLOW)
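Optionally, you can verify the new entry by listing the ACLs with the admin credentials:
# List the ACLs stored by Gateway to confirm the entry for local-app-finance-dev.
kafka-acls --bootstrap-server localhost:6969 \
  --command-config local-acl-admin.properties \
  --list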
Finally, let's list the topics using the `local-app-finance-dev` service account:
kafka-topics --list --bootstrap-server localhost:6969 --command-config local-client.properties
finance-data
finance-report
Manage an External Service Account
An external service account is managed by an external OIDC identity provider. This means we only have to make the Gateway aware of it by giving it its OIDC principal (the `externalNames`). The credentials that will be used by this application are already defined in the OIDC identity provider.
To follow these steps on your machine, you need to have configured an OAUTHBEARER provider in the docker-compose file you are using; otherwise, use this section as a reference.
To create this external service account reference on the Gateway, run the following command to create a Gateway service account:
- named `azure-app-billing-dev`
- recognized by its OIDC principal (`"externalNames" : [ "TO_FILL" ]`)
curl \
--request PUT \
--url 'http://localhost:8888/gateway/v2/service-account?dryMode=false' \
--header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y' \
--header 'Content-Type: application/json' \
--data-raw '{
"kind" : "GatewayServiceAccount",
"apiVersion" : "gateway/v2",
"metadata" : {
"name" : "azure-app-billing-dev",
"vCluster" : "passthrough"
},
"spec" : {
"type" : "EXTERNAL",
"externalNames" : [ "TO_FILL" ]
}
}'
Now you can apply some interceptors to this service account, by referring to the service account name, `azure-app-billing-dev`.
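For example, an interceptor can be scoped to this service account by name. The sketch below is illustrative only: it assumes the GatewayInterceptor resource of the v2 API, and the pluginClass, priority and config shown are placeholders to adapt to an interceptor you actually use:
# Hypothetical: attach an interceptor scoped to the azure-app-billing-dev service account.
# The pluginClass, priority and config are placeholders.
curl \
  --request PUT \
  --url 'http://localhost:8888/gateway/v2/interceptor' \
  --header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y' \
  --header 'Content-Type: application/json' \
  --data-raw '{
  "kind" : "GatewayInterceptor",
  "apiVersion" : "gateway/v2",
  "metadata" : {
    "name" : "billing-topic-policy",
    "scope" : {
      "vCluster" : "passthrough",
      "username" : "azure-app-billing-dev"
    }
  },
  "spec" : {
    "pluginClass" : "io.conduktor.gateway.interceptor.safeguard.CreateTopicPolicyPlugin",
    "priority" : 100,
    "config" : { "numPartition" : { "min" : 1, "max" : 6, "action" : "BLOCK" } }
  }
}'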
Connect to the Gateway with an External Service Account
You can now connect to the Gateway using the `azure-app-billing-dev` service account.
Here is the type of properties file you may use to connect to the Gateway using OAUTHBEARER:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=OAUTHBEARER
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler
sasl.oauthbearer.token.endpoint.url="TO_FILL"
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required clientId="TO_FILL" clientSecret="TO_FILL" scope=".default";
And here is an example of using the Kafka CLI to list the topics, using this service account:
kafka-topics --list --bootstrap-server localhost:6969 --command-config external-client.properties
In this case, the command wouldn't return anything because ACLs are enabled on the passthrough virtual cluster (`GATEWAY_ACL_ENABLED: true`). This means that the external service account doesn't have the permissions to see any resources: it's not authorized. The next step is to give it some ACLs so it can list topics.
Create ACLs for an External Service Account
The steps here are exactly the same as for the local service account. Follow them again, using the `azure-app-billing-dev` service account instead of `local-app-finance-dev`:
kafka-acls --bootstrap-server localhost:6969 \
--command-config local-acl-admin.properties \
--add \
--allow-principal User:azure-app-billing-dev \
--operation write \
--topic finance- \
--resource-pattern-type prefixed
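Then list the topics again, this time with the external client's credentials, to confirm the permissions are applied:
# The finance- topics should now be visible to azure-app-billing-dev.
kafka-topics --list --bootstrap-server localhost:6969 --command-config external-client.properties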
Differences if using Virtual Clusters
The example above uses the default `passthrough` Virtual Cluster. If you are using your own Virtual Clusters, you need to make a few changes.
First, let's see how to create a Virtual Cluster with ACLs enabled and a super user declared. Then, we'll create the super user credentials in order to give permissions to the application service accounts.
Create the Virtual Cluster with an ACL Admin
The command below creates a Virtual Cluster called `my-vcluster`, with ACLs enabled and a super user named `local-acl-admin`.
curl \
--request PUT \
--url 'http://localhost:8888/gateway/v2/virtual-cluster?dryMode=false' \
--header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y' \
--header 'Content-Type: application/json' \
--data-raw '{
"kind" : "VirtualCluster",
"apiVersion" : "gateway/v2",
"metadata" : { "name" : "my-vcluster" },
"spec" : {
"aclEnabled" : true,
"superUsers" : [ "local-acl-admin" ]
}
}'
Service Account Creation in a Virtual Cluster
Now that the Virtual Cluster `my-vcluster` exists, create the local Service Account for the super user:
curl \
--request PUT \
--url 'http://localhost:8888/gateway/v2/service-account' \
--header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y' \
--header 'Content-Type: application/json' \
--data-raw '{
"kind" : "GatewayServiceAccount",
"apiVersion" : "gateway/v2",
"metadata" : {
"name" : "local-acl-admin",
"vCluster" : "my-vcluster"
},
"spec" : { "type" : "LOCAL" }
}'
Finally, get its secret key:
curl \
--request POST \
--url 'http://localhost:8888/gateway/v2/token' \
--header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y' \
--header 'Content-Type: application/json' \
--data-raw '{
"vCluster": "my-vcluster",
"username": "local-acl-admin",
"lifeTimeSeconds": 3600000
}'
Note that the same modification applies for external Service Accounts.
Now you can create a properties file, `local-acl-admin.properties`, using the credentials you just generated. Refer to the previous sections for creating ACLs for local and external Service Accounts.
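As a minimal sketch, the file follows the same format as before; replace <TOKEN> with the token returned by the previous call:
# Write the my-vcluster ACL admin credentials to a properties file (replace <TOKEN>).
cat > local-acl-admin.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="local-acl-admin" password="<TOKEN>";
EOF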