Provider Features
This page details the various features available to providers in the Lava network, along with instructions on how to operate them:
- Rewards Tracking
- Caching
- Addons and Extensions
- Freeze and Unfreeze
- Advanced Auth Configuration
- ip-forwarding Configuration
- node-timeout Configuration
- Load Balancer Configuration
- Prometheus Metrics Configuration
Rewards Tracking
Once a provider is up and running, consumers will request relay services from it. After serving these relays, the provider becomes eligible for rewards from the consumers it served; rewards are paid out when a consumer's monthly plan expires.
Providers can query the estimated amount they will receive with the query:
lavad q pairing provider-monthly-payout <lava-provider-address>
Claimable rewards can be queried with:
lavad q dualstaking delegator-rewards <lava-provider-address>
and can be claimed to the provider balance with:
lavad tx dualstaking claim-rewards --from <provider-key>
Caching
Lava's caching service is used to cut costs and improve the overall performance of the network. Both provider and consumer processes benefit from the caching service. Providers who enable caching may be able to return responses faster than providers who do not have caching enabled.
In order to use the caching service, run the following process:
ListenAddress="127.0.0.1:7777"
ListenMetricsAddress="127.0.0.1:5747"
lavap cache $ListenAddress --metrics_address $ListenMetricsAddress --log_level debug
The cache service will run in the background. Connect the provider or consumer process to the caching service as applicable:
rpcprovider
lavap rpcprovider <your-regular-cli-options> --cache-be $ListenAddress
rpcconsumer
lavap rpcconsumer <your-regular-cli-options> --cache-be $ListenAddress
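To confirm the cache process is up, you can probe its metrics endpoint. This is a quick sketch under the assumption that the address passed via --metrics_address serves Prometheus-formatted metrics over HTTP at /metrics; adjust the address to your own setup:
# hypothetical health check against the metrics address configured above
curl -s http://127.0.0.1:5747/metrics | head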
Addons and Extensions
Addons and Extensions are services that can be offered in addition to the basic spec on a provider service. Addons are APIs exposed in addition to existing APIs, while Extensions are changes to existing API responses.
A few examples:
"archive"
- an extension providing valid responses for older blocks than the current pruning definition in the basic spec"debug"
- an addon that offers debug apis in addition to basic rpc calls
Why Addons and Extensions?
Servicing Addons and Extensions can be a good way to generate additional traffic to your endpoints and earn higher rewards.
Additional Traffic
Consumers can use addons and extensions without any client configuration; they're included in the consumer subscription. The Lava Protocol automatically routes requests to providers that support the desired services, so providers offering specific addons may see increased traffic, while providers that don't support them won't receive such requests.
Higher Rewards
Extensions can also provide a CU boost on the regular API, meaning modified calls may be more highly rewarded. "archive" calls, for example, carry a large CU multiplier per API request. Only API requests to the "archive" endpoint award these additional CUs, and Lava knows whether a call is "archive" or not according to the pruning defined in the regular spec by governance.
Set up Addons and Extensions
Addons and Extensions are configured in the provider service config and then staked for on chain. For simplicity's sake, both Addons and Extensions are defined using the addons: field. For a reference, please see this example provider config file.
Addon (Config File)
To add an addon to the service, the YAML must list it under the addons: field:
endpoints:
  - api-interface: jsonrpc
    chain-id: ETH1
    network-address:
      address: "127.0.0.1:2224"
    node-urls:
      - url: my-eth-node.com/eth-with-debug/ws
        addons:
          - debug
Extension (Config File)
Since extensions must offer consumers both the regular spec functionality and the extended functionality, both must be present. Therefore, unlike addons, extensions must be configured in an additional url entry:
endpoints:
  - api-interface: jsonrpc
    chain-id: ETH1
    network-address:
      address: "127.0.0.1:2224"
    node-urls:
      - url: my-eth-node.com/eth/ws/ # must keep this line
      - url: my-eth-node.com/eth-with-archive/ws
        addons:
          - archive
Although this configuration gives you the chance to load balance different extension calls, if you run only a single archive node and do not want archive calls automatically load balanced to a pruned node, you can set both urls to point to the archive node:
endpoints:
  - api-interface: jsonrpc
    chain-id: ETH1
    network-address:
      address: "127.0.0.1:2224"
    node-urls:
      - url: my-eth-node.com/eth-with-archive/ws
      - url: my-eth-node.com/eth-with-archive/ws
        addons:
          - archive
Multiple Extensions (Config File)
Additional extensions must be defined with all possible combinations; for example, compliance + archive looks like this:
endpoints:
  - api-interface: jsonrpc
    chain-id: ETH1
    network-address:
      address: "127.0.0.1:2224"
    node-urls:
      - url: my-eth-node.com/eth/ws/
      - url: my-eth-node.com/eth-with-archive/ws
        addons:
          - archive
      - url: my-eth-node.com/eth-with-compliance/ws
        addons:
          - compliance
      - url: my-eth-node.com/eth-with-compliance-and-archive/ws
        addons:
          - compliance
          - archive
Combination (Config File)
A combination of an extension and addons will look like this:
endpoints:
  - api-interface: jsonrpc
    chain-id: ETH1
    network-address:
      address: "127.0.0.1:2224"
    node-urls:
      - url: my-eth-node.com/eth/ws/archive
        addons:
          - archive
      - url: my-eth-node.com/eth-with-debug/ws
        addons:
          - debug
Staking
Before staking, make sure your process works correctly. If addons or extensions fail to verify, the entire service for that spec and API interface will fail. Please use the lavap test rpcprovider command to verify that your setup is correct.
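As a hedged example (exact flags can differ between lavap versions, so confirm with lavap test rpcprovider --help; the key name and RPC node below are placeholders), a verification run for a staked provider might look like:
# test the provider's staked endpoints before relying on them for rewards
lavap test rpcprovider --from my_account_name --node {PUBLIC_RPC}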
Staking with an addon or an extension is very similar to the normal staking command. Simply modify it by adding the list of addons and extensions, separated by commas:
Staking Example: stake-provider
Ethereum Mainnet in US with archive and debug
lavap tx pairing stake-provider "ETH1" \
"50000000000ulava" \
"provider-host.com:443,USC,archive,debug" USC \
--from "my_account_name" \
--provider-moniker "your-moniker" \
--keyring-backend "test" \
--chain-id {CHAIN_ID} \
--gas="auto" \
--gas-adjustment "1.5" \
--node {PUBLIC_RPC} \
--delegation-limit 100000000000ulava
The delegation-limit flag is mandatory (but will be removed in future versions). The delegation limit is the maximum amount of delegations the provider is willing to use for the pairing process.
Larger delegations mean the provider will be paired more often with consumers. If a provider receives many delegations but can't handle the resulting consumer traffic, they can set the delegation limit lower than their actual total delegations. This reduces their pairing frequency and workload. The delegated tokens remain with the provider, but their influence is reduced by artificially lowering the provider's effective stake (composed of self-stake and delegations).
Setting the limit to "0ulava" imposes a strict restriction: no amount of delegations will affect the pairing mechanism.
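For instance, the stake-provider command above can be issued with the flag set to zero so that delegations have no effect on pairing (all other arguments unchanged):
# exclude delegations from the pairing calculation
  --delegation-limit "0ulava"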
Note the required addition of ,archive,debug in each endpoint that supports them, if several exist. Setting endpoints in a transaction replaces any existing endpoints, so make sure to provide the full list of endpoints.
Also, the geolocation specified in the endpoints must match the geolocation argument (in the example, the endpoint is set up in USC and the geolocation argument's value is USC, as expected).
Finally, the optional --provider flag allows defining another Lava address as the provider's operational address. The --from address will be considered the provider's vault address.
The vault address is used to hold the provider entity's funds and to receive rewards from the provider entity's service. Any other action performed by the provider entity uses the provider entity's provider address. The provider address can perform all actions except staking/unstaking, modifying stake-related fields in the provider entity's stake entry, and claiming rewards.
To let the provider address use the vault's funds for gas fees, use the --grant-provider-gas-fees-auth flag. The only transactions funded by the vault are: relay-payment, freeze, unfreeze, modify-provider, detection (conflict module), conflict-vote-commit and conflict-vote-reveal. When executing any of these transactions with the provider entity using the CLI, use the --fee-granter flag to specify the vault address that will pay for the gas fees. It's important to note that once a provider address is registered through a provider entity's staking, it cannot stake on the same chain again.
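As an illustrative sketch (the key name and addresses are placeholders), a freeze transaction sent from the provider address but paid for by the vault via the fee grant would look like:
# gas fees for this freeze are paid by the vault through the fee grant
lavap tx pairing freeze ETH1 \
  --from <provider-key> \
  --fee-granter <vault-lava-address> \
  --chain-id {CHAIN_ID} \
  --node {PUBLIC_RPC}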
Staking Example: modify-provider
It is also possible to add these to an existing entry with the modify-provider command:
lavap tx pairing modify-provider "ETH1" --endpoints "provider-host.com:443,USC,archive,debug" --geolocation "USC" ...
Freeze and Unfreeze
The freeze command allows a provider to freeze its service, effective next epoch. This lets providers pause their services without suffering a bad QoS rating. While frozen, the provider won't be paired with consumers. To unfreeze, the provider must send the unfreeze transaction, which also takes effect next epoch. This can be useful, for example, when a provider needs to halt its services during maintenance.
Usage
Freeze:
# required flags: --from alice. optional flags: --reason
lavap tx pairing freeze [chain-ids] --from <provider_address>
lavap tx pairing freeze [chain-ids] --from <provider_address> --reason <freeze_reason>
lavap tx pairing freeze ETH1,COS3 --from alice --reason "maintenance"
The freeze command requires the --from flag to specify the provider address. Optionally, you can provide a --reason flag to give a reason for the freeze.
Unfreeze:
# required flags: --from alice
lavap tx pairing unfreeze [chain-ids] --from <provider_address>
lavap tx pairing unfreeze ETH1,COS3 --from alice
The unfreeze command also requires the --from flag to specify the provider address.
Advanced Auth Configuration
In this example, the COS3 Tendermint URLs use client authentication, assuming the node URL is capable of processing this authentication.
Auth using HTTP headers
The following RPCProvider config example demonstrates authentication using the "auth-headers" option:
endpoints:
  - api-interface: tendermintrpc
    chain-id: COS3
    network-address:
      address: 127.0.0.1:2221
    node-urls:
      - url: ws://127.0.0.1:26657/websocket
        auth-config:
          auth-headers:
            WANTED_HEADER_NAME_1: xyz
      - url: http://127.0.0.1:26657
        auth-config:
          auth-headers:
            Authorization: 'Bearer xxyyzz'
Auth using Query Params
The following RPCProvider config example demonstrates authentication using the "auth-query" option:
endpoints:
  - api-interface: tendermintrpc
    chain-id: COS3
    network-address:
      address: 127.0.0.1:2221
    node-urls:
      - url: ws://127.0.0.1:26657/websocket
        auth-config:
          auth-query: auth=xxyyzz
      - url: http://127.0.0.1:26657
        auth-config:
          auth-query: auth=xyz
gRPC TLS configuration
If you want to add TLS authentication to your gRPC endpoint you have 3 options:
1. Dynamic certificate
endpoints:
  - api-interface: grpc
    chain-id: LAV1
    network-address:
      address: 127.0.0.1:2221
    node-urls:
      - url: 127.0.0.1:9090
        auth-config:
          use-tls: true
2. Provide a certificate and a key:
endpoints:
  - api-interface: grpc
    chain-id: LAV1
    network-address:
      address: 127.0.0.1:2221
    node-urls:
      - url: 127.0.0.1:9090
        auth-config:
          use-tls: true
          key-pem: /home/user/key.pem
          cert-pem: /home/user/cert.pem
3. Provide a root certificate:
endpoints:
  - api-interface: grpc
    chain-id: LAV1
    network-address:
      address: 127.0.0.1:2221
    node-urls:
      - url: 127.0.0.1:9090
        auth-config:
          use-tls: true
          cacert-pem: /home/user/cert.pem
ip-forwarding Configuration
If you want to load balance or throttle by IP, this is also supported by adding ip-forwarding: true. The IP will be added to the X-Forwarded-For header.
endpoints:
  - api-interface: jsonrpc
    chain-id: ETH1
    network-address:
      address: 127.0.0.1:2221
    node-urls:
      - url: ws://your_node_url/
        ip-forwarding: true
node-timeout Configuration
Overriding the timeout can result in inferior QoS for consumers. If your node is too far from the rpcprovider or responds too slowly, and you still want your provider process to start without troubleshooting, you can override the timeouts with custom values in the node-urls configuration:
endpoints:
  - api-interface: jsonrpc
    chain-id: ETH1
    network-address:
      address: 127.0.0.1:2221
    node-urls:
      - url: ws://your_node_url/
        timeout: 1s
Load Balancer Configuration
Running multiple nodes behind a load balancer can be done in multiple setups:
- Run a Provider on each node - provider processes can coexist if you load balance gRPC in front of the provider process and run a provider service on each node machine (close proximity)
- Run one Provider service and load balance the nodes - in this case all the nodes are used by one provider service; this setup is more likely to trigger consistency issues between the provider service and the nodes
Setting up Stickiness Support for Load-Balanced Nodes
If you've set up the second option, meaning one provider service in front of multiple nodes, you must provide stickiness across the nodes via a header. The reason is that the cryptographic proofs a provider signs must be consistent and cannot have blocks progress backwards. To support stickiness, Lava by default adds a header called X-Node-Sticky; this header carries a consumer token composed of several factors and is unique per consumer usage.
Changing the Stickiness Header
To support existing load balancer configurations, the header name can be changed in the config:
endpoints:
  - api-interface: jsonrpc
    chain-id: ETH1
    network-address:
      address: 127.0.0.1:2221
    node-urls:
      - url: ws://your_node_url/
        sticky-header: <your-sticky-header-name>
Prometheus Metrics Configuration
Adding support for Prometheus is a simple change: set the metrics-listen-address field in the config below. Please note that all Lava metrics start with lava_.
endpoints:
  - api-interface: jsonrpc
    chain-id: ETH1
    network-address:
      address: 127.0.0.1:2221
    node-urls:
      - url: ws://your_node_url/
metrics-listen-address: ":7780"
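To verify that the exporter is reachable (assuming the standard Prometheus /metrics path on the address configured above), you can scrape it and filter for Lava metrics:
# quick check; adjust the port to match your metrics-listen-address
curl -s http://127.0.0.1:7780/metrics | grep '^lava_'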