Pollum's Infra Reports [OCT - NOV]

#Pollum Infra Report for Syscoin Community

#Report Period: OCT - NOV

#Report Date: 4 December

#Table of Contents

  1. Executive Summary
  2. Services
  3. New Developments

#Executive Summary

The priority for this period was to create an API key system to rate-limit requests to the RPC and GraphQL nodes.

Initially, AWS API Gateway and Cloudflare were evaluated as options for the system, but both were discarded due to their price. The rate limit and API key were then implemented on top of the eth-proxy, using the same codebase. The API key system is currently in the testing phase.

The API key aims to reduce infrastructure costs going forward. Costs have already decreased since September, and we expect the API key to reduce them further.
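The core of a per-key rate limit can be sketched as a token bucket held for each API key. The class below is a minimal illustration of that idea, not the actual eth-proxy implementation; the limits and key names are hypothetical:

```python
import time

class TokenBucket:
    """Per-API-key token bucket: `rate` tokens are refilled per
    second, up to a maximum of `capacity` stored tokens."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.buckets = {}  # api_key -> (tokens, last_seen_timestamp)

    def allow(self, api_key: str, now: float = None) -> bool:
        """Consume one token for `api_key`; return False when the
        key has exhausted its budget for the current window."""
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(api_key, (self.capacity, now))
        # Refill proportionally to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens < 1:
            self.buckets[api_key] = (tokens, now)
            return False
        self.buckets[api_key] = (tokens - 1, now)
        return True
```

A request carrying an unknown or over-budget key would then be rejected before it ever reaches the RPC node, which is what drives the cost reduction.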

#Services

#RPC L1/L2 Public Infra

#Overview

#Short Description

The RPC nodes are set up using Route 53, Global Accelerator, application load balancers, and auto-scaling groups across three AWS regions: Seoul, Frankfurt, and Virginia. Each region has its own auto-scaling group with one application load balancer. By default, each auto-scaling group runs only one EC2 instance, but it can be scaled up as needed. The application load balancer handles SSL termination and the DNS for L1 and L2.

Finally, the Global Accelerator monitors the load balancers' health and routes incoming requests to a healthy one according to the client's region. Route 53 is used only for DNS.
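The accelerator's behaviour of sending each request to the nearest healthy load balancer can be illustrated with a small selection function. This is a sketch of the routing idea only; the region names, endpoints, and latencies are hypothetical:

```python
def pick_endpoint(regions: dict, healthy: set) -> str:
    """Pick the healthy region with the lowest latency to the client.

    `regions` maps region name -> (endpoint, latency_ms);
    `healthy` is the set of regions whose load balancer currently
    passes its health checks.
    """
    candidates = {r: v for r, v in regions.items() if r in healthy}
    if not candidates:
        raise RuntimeError("no healthy load balancer available")
    best = min(candidates, key=lambda r: candidates[r][1])
    return candidates[best][0]
```

When one region's load balancer fails its checks, traffic simply falls through to the next-closest healthy region, which is why a single EC2 instance per region is an acceptable default.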

#RPC L1/L2 Archive Infra

#Overview

  • Endpoints Breakdown:
    • L1
      • 18.188.59.171:8545
    • L2
      • 13.59.22.26:8545
  • Status:
    • 18.188.59.171:8545 - Operational
    • 13.59.22.26:8545 - Operational
  • Updates:
    • N/A

#Short Description

The nodes are configured on two separate instances. The L1 node uses a custom container image built from version v4.2.2 of sysnevm. The L2 node uses the current automation provided by Syscoin in the Rollux repository.
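What distinguishes these archive nodes from the public ones is that they can serve state at arbitrary historical blocks. The helper below builds the kind of JSON-RPC request an archive endpoint answers; the address and block number are placeholders, not values from the report:

```python
import json

def historical_balance_request(address: str, block_number: int,
                               request_id: int = 1) -> str:
    """Build a JSON-RPC `eth_getBalance` request pinned to a past
    block. A pruned (non-archive) node would typically fail such a
    query for old blocks, since the historical state is discarded."""
    payload = {
        "jsonrpc": "2.0",
        "method": "eth_getBalance",
        "params": [address, hex(block_number)],  # block tag as hex quantity
        "id": request_id,
    }
    return json.dumps(payload)
```

The resulting body can be POSTed to either archive endpoint on port 8545.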

#Blockbook

#Overview

#Short Description

We reduced the service to a single node since traffic has been lower. We kept the load balancer in front of it in case it needs to be scaled again.

#The Graph Nodes

#Overview

#Short Description

All the graph nodes run on dedicated machines. Both mainnet nodes run on x86 architecture, while the testnet node runs on Arm architecture, aiming for lower cost.

There is no dedicated scalability infrastructure configured for the graph nodes. All of them run on single EC2 instances and are accessible by IP. Only the testnet graph node has a DNS entry for partners.

As we progress with the testing (mostly infra adjustments) and stability of these nodes, they will get open DNS entries for community usage.
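Consumers reach a graph node over HTTP with a GraphQL POST body. The helper below builds such a body for a hypothetical subgraph; the `transfers` entity and its fields are illustrative only, not a real subgraph schema from this deployment:

```python
import json

def subgraph_query(first: int = 5) -> str:
    """Build a GraphQL request body asking a subgraph for its most
    recent entities (hypothetical `transfers` entity)."""
    query = """
    query LatestTransfers($first: Int!) {
      transfers(first: $first, orderBy: timestamp, orderDirection: desc) {
        id
        timestamp
      }
    }
    """
    return json.dumps({"query": query, "variables": {"first": first}})
```

Once the nodes get open DNS, partners would POST this body to the subgraph's query endpoint instead of the raw IP.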

#New Developments

Development was focused on creating an API key system for the RPC and NEVM nodes. The API key system is being developed in the eth-proxy and currently uses DynamoDB for data persistence. For now, the API key is being tested on the sysnevm nodes, and changes are being implemented to use the same service for the GraphQL nodes as well.
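Persisting keys in DynamoDB reduces validation to a single item lookup per request. The sketch below shows the shape of that check against any client exposing DynamoDB's `get_item` interface (such as a boto3 Table resource); the table layout and the `api_key`/`active` attribute names are assumptions, not the real eth-proxy schema:

```python
def is_valid_api_key(table, api_key: str) -> bool:
    """Return True if `api_key` exists in the table and is marked
    active. `table` can be a boto3 DynamoDB Table resource or any
    stub exposing the same `get_item(Key=...)` interface."""
    response = table.get_item(Key={"api_key": api_key})
    item = response.get("Item")
    return bool(item and item.get("active", False))
```

Keeping the lookup behind this small interface is also what makes it straightforward to reuse the same validation from the GraphQL proxy later.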