#Pollum Infra Report for Syscoin Community
#Report Period: OCT - NOV
#Report Date: 4 December
#Table of Contents
- Executive Summary
- Services
  - RPC L1/L2 Public Infra
  - RPC L1/L2 Archive Infra
  - Blockbook
  - The Graph Nodes
- New Developments
#Executive Summary
The priority for this period was to create an API key system to rate-limit requests to the RPC and GraphQL nodes.
Initially, AWS API Gateway and Cloudflare were evaluated as options for the system, but both were discarded due to cost. The rate limit and API key were then implemented on top of eth-proxy, using the same codebase. The API key system is currently in the test phase.
The API key system aims to reduce infrastructure costs going forward. Costs have already decreased since September, and we expect the API key to help reduce them further.
#Services
#RPC L1/L2 Public Infra
#Overview
- Endpoints Breakdown:
  - L1
    - mainnet:
      - https://rpc.syscoin.org
      - wss://rpc.syscoin.org/wss
    - testnet:
      - https://rpc.tanenbaum.io
      - wss://rpc.tanenbaum.io/wss
    - tools:
      - mainnet: https://tools.rpc.syscoin.org/
      - testnet: https://tools.rpc.tanenbaum.io/
  - L2
    - mainnet:
      - https://rollux.rpc.syscoin.org
      - wss://rollux.rpc.syscoin.org/wss
    - testnet:
      - https://rollux.rpc.tanenbaum.io
      - wss://rollux.rpc.tanenbaum.io/wss
    - tools:
      - mainnet: https://tools.rollux.rpc.syscoin.org/
      - testnet: https://tools.rollux.rpc.tanenbaum.io/
- Status:
- https://rpc.syscoin.org - Operational
- wss://rpc.syscoin.org/wss - Operational
- https://rpc.tanenbaum.io - Operational (running on a dedicated machine without Pollum’s proprietary infrastructure)
- wss://rpc.tanenbaum.io/wss - Operational
- https://rollux.rpc.syscoin.org - Operational
- wss://rollux.rpc.syscoin.org/wss - Operational
- https://rollux.rpc.tanenbaum.io - Operational (running on a dedicated machine without Pollum’s proprietary infrastructure)
- wss://rollux.rpc.tanenbaum.io/wss - Operational (running on a dedicated machine without Pollum’s proprietary infrastructure)
- https://tools.rpc.syscoin.org/ - Operational
- https://tools.rollux.rpc.syscoin.org/ - Operational
- https://tools.rpc.tanenbaum.io/ - Operational
- https://tools.rollux.rpc.tanenbaum.io/ - Operational
- Updates:
- The API key and rate limit system is in the test phase, running on testnet with private access.
- The Tanenbaum (L1) nodes were reduced to one, running on the same machine as L2.
- Significant cost reduction from the changes of the last two months. The new cache system and the new infrastructure for the RPC cluster made it possible to reduce the overall cost of the infrastructure using fewer computational resources, without losing performance.
#Short Description
The RPC nodes are set up using Route 53, Global Accelerator, Application Load Balancers, and Auto Scaling groups across three AWS regions: Seoul, Frankfurt, and Virginia. Each region has its own Auto Scaling group with one Application Load Balancer. By default, each Auto Scaling group runs only one EC2 instance, but it can be scaled up as needed. The Application Load Balancer concentrates SSL termination and DNS for L1 and L2.
Finally, the Global Accelerator monitors the load balancers’ health and routes incoming requests according to the region. Route 53 is used only for DNS.
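For reference, the public endpoints above accept standard JSON-RPC over HTTPS. A minimal sketch of reading the current block height from them, assuming Node.js 18+ with the global fetch API (nothing here is part of our infrastructure code):

```ts
// Minimal sketch: read the current block height from the public L1 and L2 RPCs.
// Endpoint URLs are taken from the Endpoints Breakdown above.
async function blockNumber(rpcUrl: string): Promise<number> {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
  });
  const { result } = await res.json(); // hex-encoded block height
  return parseInt(result, 16);
}

blockNumber("https://rpc.syscoin.org").then((h) => console.log("L1 height:", h));
blockNumber("https://rollux.rpc.syscoin.org").then((h) => console.log("L2 height:", h));
```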
#RPC L1/L2 Archive Infra
#Overview
- Endpoints Breakdown:
  - L1
    - 18.188.59.171:8545
  - L2
    - 13.59.22.26:8545
- Status:
- 18.188.59.171:8545 - Operational
- 13.59.22.26:8545 - Operational
- Updates:
- N/A
#Short Description
The nodes are configured on two separate instances. The L1 node uses a custom container image built from version v4.2.2 of sysnevm. The L2 node uses the current automation provided by Syscoin in the Rollux repository.
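As an illustration of what the archive nodes allow, here is a minimal sketch of a historical state read against the L1 archive endpoint listed above; the address and block number are placeholders, and plain HTTP on port 8545 is an assumption:

```ts
// Minimal sketch: historical balance lookup against the L1 archive node.
// Archive nodes keep full historical state, so state reads can target old blocks.
async function balanceAt(rpcUrl: string, address: string, block: number): Promise<bigint> {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getBalance",
      params: [address, "0x" + block.toString(16)], // historical block tag
    }),
  });
  const { result } = await res.json();
  return BigInt(result); // balance in wei at that block
}

// Placeholder address and block number, for illustration only.
balanceAt("http://18.188.59.171:8545", "0x0000000000000000000000000000000000000000", 100000)
  .then((wei) => console.log("balance at block 100000:", wei));
```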
#Blockbook
#Overview
- Endpoints Breakdown:
  - UTXO Mainnet
  - UTXO Testnet
- Status:
- https://blockbook.elint.services/ - Operational
- https://blockbook-dev.elint.services/ - Operational
- Updates:
- N/A
#Short Description
We reduced the service to a single node since traffic has been lower. We kept the load balancer in front of it in case it needs to be scaled.
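For completeness, a minimal sketch of querying the mainnet Blockbook instance from a script, assuming the standard Blockbook /api/v2 REST paths (the address is a placeholder):

```ts
// Minimal sketch: query address details from the UTXO mainnet Blockbook above.
// Assumes the standard Blockbook /api/v2 REST interface.
const BLOCKBOOK = "https://blockbook.elint.services";

async function addressSummary(address: string) {
  const res = await fetch(`${BLOCKBOOK}/api/v2/address/${address}`);
  if (!res.ok) throw new Error(`Blockbook responded with ${res.status}`);
  return res.json(); // balance, tx count, and recent txids for the address
}

// Placeholder address, for illustration only.
addressSummary("sys1qexampleaddress").then((info) => console.log(info));
```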
#The Graph Nodes
#Overview
- Endpoints Breakdown:
  - L1
    - 18.224.43.215
  - L2
    - mainnet
      - 18.117.235.62
    - testnet
      - rollux.graph.rpc.tanenbaum.io
- Status:
- 18.224.43.215 - Operational
- 18.117.235.62 - Operational
- rollux.graph.rpc.tanenbaum.io - Operational
- Updates:
- The API key service is being implemented for the GraphQL nodes.
#Short Description
All the graph nodes are running on dedicated machines. Both mainnet nodes run on x86 architecture, while the testnet node runs on Arm architecture, aiming for lower cost.
There is no dedicated scalability infrastructure configured for the graph nodes. All of them run on single EC2 instances and are accessible by IP. Only the testnet graph node has a DNS entry for partners.
As we progress with testing (mostly infra adjustments) and the stability of these nodes, they will get open DNS entries for community usage.
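A minimal sketch of querying one of these graph nodes over GraphQL; the query port (8000), the /subgraphs/name/ path, and the subgraph name are assumptions taken from a default graph-node setup, not from our deployment:

```ts
// Minimal sketch: run a GraphQL query against the L2 mainnet graph node above.
// Port 8000 and the subgraph name are illustrative assumptions.
const GRAPH_NODE = "http://18.117.235.62:8000";

async function querySubgraph(subgraph: string, query: string) {
  const res = await fetch(`${GRAPH_NODE}/subgraphs/name/${subgraph}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  return res.json();
}

// _meta is exposed by every subgraph and reports the latest indexed block.
querySubgraph("example/subgraph", "{ _meta { block { number } } }")
  .then((data) => console.log(data));
```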
#New Developments
Development was focused on creating an API key system for the RPC and NEVM nodes. The API key system is being developed in eth-proxy and currently uses DynamoDB for data persistence. For now, the API key is being tested on the sysnevm nodes, and changes are being implemented so that the same service can also be used by the GraphQL nodes.
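To make the design concrete, here is a simplified sketch of the kind of check the proxy performs, assuming a DynamoDB table keyed by API key and a per-key fixed-window counter held in memory; the table name, key schema, and limits are illustrative assumptions, not the actual eth-proxy implementation:

```ts
// Illustrative sketch only: an API key check backed by DynamoDB plus a simple
// rate limit, in the spirit of the system described above. Table name, key
// schema, and limits are assumptions, not the actual eth-proxy code.
import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";

const db = new DynamoDBClient({}); // region/credentials come from the environment
const TABLE = "api-keys"; // hypothetical table name

// Fixed-window request counter per key, kept in memory on the proxy instance.
const WINDOW_MS = 60_000;
const counters = new Map<string, { count: number; resetAt: number }>();

async function isKnownKey(apiKey: string): Promise<boolean> {
  const out = await db.send(
    new GetItemCommand({ TableName: TABLE, Key: { apiKey: { S: apiKey } } })
  );
  return out.Item !== undefined;
}

async function allowRequest(apiKey: string, limitPerMinute = 600): Promise<boolean> {
  if (!(await isKnownKey(apiKey))) return false; // unknown key: reject

  const now = Date.now();
  const entry = counters.get(apiKey);
  if (!entry || now > entry.resetAt) {
    counters.set(apiKey, { count: 1, resetAt: now + WINDOW_MS });
    return true;
  }
  entry.count += 1;
  return entry.count <= limitPerMinute; // over the per-minute limit: reject
}

// Example: decide whether to forward a request carrying this key.
allowRequest("demo-key").then((ok) => console.log(ok ? "forward" : "reject with 429"));
```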