Pollum Infra Report for Syscoin Community
Report Period: DEC-JAN
Report Date: 24th January
Table of Contents
Executive Summary
The Pollum infrastructure received no new features from December to January. During this period, it received only small fixes and reliability improvements.
Services
RPC L1/L2 Public Infra
Overview
- Endpoints Breakdown:
  - L1
    - mainnet:
      - https://rpc.syscoin.org
      - wss://rpc.syscoin.org/wss
    - testnet:
      - https://rpc.tanenbaum.io
      - wss://rpc.tanenbaum.io/wss
    - tools:
      - mainnet
      - testnet
  - L2
    - mainnet:
      - https://rollux.rpc.syscoin.org
      - wss://rollux.rpc.syscoin.org/wss
    - testnet:
      - https://rollux.rpc.tanenbaum.io
      - wss://rollux.rpc.tanenbaum.io/wss
    - tools:
      - mainnet
- Status:
  - https://rpc.syscoin.org - Operational
  - wss://rpc.syscoin.org/wss - Operational
  - https://rpc.tanenbaum.io - Operational (running on a dedicated machine without Pollum’s proprietary infrastructure)
  - wss://rpc.tanenbaum.io/wss - Operational
  - https://rollux.rpc.syscoin.org - Operational
  - wss://rollux.rpc.syscoin.org/wss - Operational
  - https://rollux.rpc.tanenbaum.io - Operational (running on a dedicated machine without Pollum’s proprietary infrastructure)
  - wss://rollux.rpc.tanenbaum.io/wss - Operational (running on a dedicated machine without Pollum’s proprietary infrastructure)
  - https://tools.rpc.syscoin.org/ - Operational
  - https://tools.rollux.rpc.syscoin.org/ - Operational
  - https://tools.rpc.tanenbaum.io/ - Operational
  - https://tools.rollux.rpc.tanenbaum.io/ - Operational
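As a simple illustration of how these endpoints can be checked, the sketch below sends a standard eth_blockNumber JSON-RPC call to each HTTPS endpoint and reports whether it answers. This is only an example written for this report, not the monitoring used in the infrastructure itself; the endpoint list and the use of the Python requests library are assumptions.

```python
# Minimal sketch: probe the public HTTPS RPC endpoints with a standard
# eth_blockNumber call. Illustration only; this is not Pollum's monitoring.
import requests

ENDPOINTS = [
    "https://rpc.syscoin.org",          # L1 mainnet
    "https://rpc.tanenbaum.io",         # L1 testnet
    "https://rollux.rpc.syscoin.org",   # L2 mainnet
    "https://rollux.rpc.tanenbaum.io",  # L2 testnet
]

PAYLOAD = {"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}

for url in ENDPOINTS:
    try:
        resp = requests.post(url, json=PAYLOAD, timeout=10)
        resp.raise_for_status()
        block = int(resp.json()["result"], 16)  # result is a hex string
        print(f"{url} - Operational (latest block {block})")
    except Exception as exc:  # network errors, bad JSON, non-200 status
        print(f"{url} - Unreachable ({exc})")
```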
- Updates:
  - There was an update to the RPC infrastructure to improve server reliability:
    - To prevent crashes in the WebSocket service, a generic exception handler was added that closes the WebSocket session whenever message handling fails (a minimal sketch of this pattern follows this list).
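The sketch below shows the general pattern using the Python websockets library. The handler and forwarding function names are hypothetical; the actual server implementation is not part of this report.

```python
# Sketch of a "catch-all" handler around a WebSocket session: any unexpected
# error while handling messages closes that session instead of crashing the
# server. Names and the forwarding logic are hypothetical.
import asyncio
import logging
import websockets

async def forward_to_rpc(message: str) -> str:
    # Placeholder for the real message handling / upstream forwarding.
    return message

async def session(ws):
    try:
        async for message in ws:
            await ws.send(await forward_to_rpc(message))
    except Exception:
        # Generic exception handler: log and close this session only,
        # leaving the rest of the server unaffected.
        logging.exception("error handling websocket message; closing session")
        await ws.close(code=1011, reason="internal error")

async def main():
    async with websockets.serve(session, "0.0.0.0", 8546):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```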
Short Description
The RPC nodes are set up using Route 53, Global Accelerator, application load balancers, and auto-scaling groups across three AWS regions: Seoul, Frankfurt, and Virginia. Each region has its own auto-scaling group with one application load balancer. By default, each auto-scaling group runs only one EC2 instance, but it can be scaled up as needed. The application load balancer terminates SSL and carries the DNS names for L1 and L2.
Finally, the Global Accelerator monitors the load balancers’ health and routes incoming requests according to the region. Route 53 is used only for DNS.
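For readers less familiar with this setup, the sketch below shows how the building blocks of a single region (an auto-scaling group behind an application load balancer) could be declared with the AWS CDK in Python. It is a hypothetical illustration, not Pollum's actual infrastructure code: resource names, the instance size, and the plain HTTP listener are assumptions, and Route 53, Global Accelerator, and TLS termination are omitted for brevity.

```python
# Hypothetical sketch of one region of the RPC setup with the AWS CDK:
# an auto-scaling group (one instance by default) behind an application
# load balancer. Names and sizes are made up for illustration.
from aws_cdk import App, Stack
from aws_cdk import aws_autoscaling as autoscaling
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_elasticloadbalancingv2 as elbv2
from constructs import Construct


class RpcRegionStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        vpc = ec2.Vpc(self, "RpcVpc", max_azs=2)

        # One EC2 instance by default, scalable when traffic requires it.
        asg = autoscaling.AutoScalingGroup(
            self,
            "RpcNodes",
            vpc=vpc,
            instance_type=ec2.InstanceType("m5.xlarge"),  # hypothetical size
            machine_image=ec2.AmazonLinuxImage(
                generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2
            ),
            min_capacity=1,
            max_capacity=4,
        )

        # The load balancer fronts the ASG; in the real setup it also
        # terminates SSL and carries the L1/L2 DNS names.
        alb = elbv2.ApplicationLoadBalancer(
            self, "RpcAlb", vpc=vpc, internet_facing=True
        )
        listener = alb.add_listener("Http", port=80)  # TLS omitted here
        listener.add_targets("RpcTargets", port=8545, targets=[asg])


app = App()
RpcRegionStack(app, "RpcSeoul")  # one stack per region in practice
app.synth()
```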
RPC L1/L2 Archive Infra
Overview
- Endpoints Breakdown:
  - L1
    - 18.188.59.171:8545
  - L2
    - 13.59.22.26:8545
- Status:
  - 18.188.59.171:8545 - Operational
  - 13.59.22.26:8545 - Operational
- Updates:
  - N/A
Short Description
The nodes are configured on two separate instances. The L1 node uses a custom container image built from version v4.2.2 of sysnevm. The L2 node uses the automation currently provided by Syscoin in the Rollux repository.
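To show what the archive endpoints are for (state queries at arbitrary historical blocks, which pruned nodes cannot serve), here is a hedged example of an eth_getBalance call pinned to an old block. The address and block number are placeholders, not real data.

```python
# Sketch: query an archive node for an account balance at a specific
# historical block, which a pruned node could not serve. The address
# and block number below are placeholders.
import requests

ARCHIVE_L1 = "http://18.188.59.171:8545"

payload = {
    "jsonrpc": "2.0",
    "method": "eth_getBalance",
    "params": [
        "0x0000000000000000000000000000000000000000",  # placeholder address
        hex(100_000),                                   # placeholder historical block
    ],
    "id": 1,
}

resp = requests.post(ARCHIVE_L1, json=payload, timeout=10)
balance_wei = int(resp.json()["result"], 16)
print(f"balance at block 100000: {balance_wei} wei")
```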
Blockbook
Overview
- Endpoints Breakdown:
  - UTXO Mainnet
  - UTXO Testnet
- Status:
  - https://blockbook.elint.services/ - Operational
  - https://blockbook-dev.elint.services/ - Operational
- Updates:
  - N/A
Short Description
We reduced the service to a single node since traffic has slowed. We kept the load balancer in front of it in case the service needs to be scaled again.
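For reference, a stock Blockbook instance exposes a status endpoint under /api; assuming this deployment follows the standard Blockbook API, the sketch below checks whether the mainnet instance is in sync.

```python
# Sketch: check a Blockbook instance's sync status via its public API.
# Field names follow the stock Blockbook status response; if the
# deployment differs, adjust accordingly.
import requests

status = requests.get("https://blockbook.elint.services/api", timeout=10).json()
info = status["blockbook"]
print(f"coin: {info['coin']}, in sync: {info['inSync']}, best height: {info['bestHeight']}")
```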
The Graph Nodes
Overview
- Endpoints Breakdown:
  - L1
    - 18.224.43.215
  - L2
    - mainnet
      - 18.117.235.62
    - testnet
      - rollux.graph.rpc.tanenbaum.io
- Status:
  - 18.224.43.215 - Operational
  - 18.117.235.62 - Operational
  - rollux.graph.rpc.tanenbaum.io - Operational
- Updates:
  - An API key service is being implemented for the GraphQL nodes.
Short Description
All the graph nodes run on dedicated machines. Both mainnet nodes run on x86 architecture, while the testnet node runs on Arm architecture to keep costs lower.
There is no dedicated scaling infrastructure configured for the graph nodes. All of them run on single EC2 instances and are accessible by IP; only the testnet graph node has a DNS name for partners.
As the testing (mostly infrastructure adjustments) and stability of these nodes progress, they will get public DNS names for community usage.
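Until public DNS names are available, a partner with access to one of these machines could check subgraph health through graph-node's index-status GraphQL API. The sketch below assumes the default status port (8030) is reachable on the L1 node, which may not be the case for this deployment.

```python
# Sketch: query graph-node's index-status GraphQL API for subgraph health.
# Assumes the default status port 8030 is exposed, which may not hold for
# the nodes listed above.
import requests

STATUS_URL = "http://18.224.43.215:8030/graphql"  # L1 graph node (assumed port)

query = """
{
  indexingStatuses {
    subgraph
    synced
    health
    chains { network latestBlock { number } chainHeadBlock { number } }
  }
}
"""

resp = requests.post(STATUS_URL, json={"query": query}, timeout=10)
for status in resp.json()["data"]["indexingStatuses"]:
    print(status["subgraph"], status["health"], "synced" if status["synced"] else "syncing")
```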
New Developments
There were no new features for the graph nodes during the last period. However, there are plans to add backups and a degree of infrastructure automation during the next periods.