Is there a decent WebUI or is the only other way cloud managed?
If you mean the remote server at OVH's datacenter, it's running CentOS 7 which I have root access to. I can also manage some other aspects such as mitigation on the IP address range, reverse DNS, IP WHOIS record, settings via their basic firewall if enabled, etc.
I've monitored the CPU usage and, naturally, it's barely noticeable on the remote server. It has an Intel Xeon D-1541 @ 2.10 GHz, which I believe is 8 cores / 16 threads. I'm not sure where the bottleneck is currently, other than perhaps somewhere in the network itself. I'm wondering if I should try a different type of tunnel, assuming that works with my setup, e.g. IPIP.
EDIT: I tried speedtest-cli on the OVH server and the result suggests I could get nearer to 1Gbit, so now it's just a case of figuring out why GRE can't really break the 400Mbit barrier without showing a fairly significant increase in latency, topping out around 570Mbit.
Selecting best server based on ping...
Hosted by toob Ltd (London) [343.04 km]: 4.93 ms
Testing download speed
Download: 1979.20 Mbit/s
Testing upload speed
Upload: 995.78 Mbit/s
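For what it's worth, the mtu=1468 on my tunnel does line up with the expected GRE-over-PPPoE overhead; a quick sanity check in RouterOS scripting (the 20/4 byte figures assume a plain IPv4 outer header and a basic GRE header with no key/checksum options):

```routeros
# PPPoE MTU minus outer IPv4 header (20) minus basic GRE header (4)
:put (1492 - 20 - 4)
# prints 1468, matching mtu=1468 on the GRE interface below
```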
EDIT: This is my latest config.
# aug/27/2021 11:01:50 by RouterOS 7.1rc1
# software id = [redacted]
#
# model = CCR2004-16G-2S+
# serial number = [redacted]
/interface lte
set [ find ] disabled=yes name=lte1
/interface ethernet
set [ find default-name=ether9 ] name=ether1
set [ find default-name=ether10 ] name=ether2
set [ find default-name=ether11 ] name=ether3
set [ find default-name=ether12 ] name=ether4
set [ find default-name=ether13 ] name=ether5
set [ find default-name=ether14 ] name=ether6
set [ find default-name=ether15 ] name=ether7
set [ find default-name=ether16 ] name=ether8
set [ find default-name=ether1 ] name=ether9
set [ find default-name=ether2 ] name=ether10
set [ find default-name=ether3 ] name=ether11
set [ find default-name=ether4 ] name=ether12
set [ find default-name=ether5 ] name=ether13
set [ find default-name=ether6 ] name=ether14
set [ find default-name=ether7 ] name=ether15
set [ find default-name=ether8 ] name=ether16
/interface pppoe-client
add add-default-route=yes disabled=no interface=ether1 keepalive-timeout=60 max-mru=1492 max-mtu=1492 name="PPPoE Cerberus" user=x
/interface gre
add !keepalive local-address=83.x.x.169 mtu=1468 name="OVH GRE Tunnel" remote-address=145.x.x.191
/interface list
add name=LAN
/interface wireless security-profiles
set [ find default=yes ] supplicant-identity=MikroTik
/ppp profile
add change-tcp-mss=yes name=Cerberus only-one=yes use-mpls=no
/queue type
add kind=sfq name=sfq
/queue simple
add bucket-size=0.001/0.001 comment="RX/TX are reversed" max-limit=0/100M name="PPPoE Cerberus Queue" queue=sfq/sfq target="PPPoE Cerberus"
add bucket-size=0.001/0.001 comment="RX/TX may also be reversed" max-limit=400M/400M name="OVH GRE Tunnel Queue" queue=sfq/sfq target="OVH GRE Tunnel"
/routing table
add disabled=no name=666
/ip settings
set allow-fast-path=no
/ip address
add address=192.168.1.1/24 comment=defconf interface=ether2 network=192.168.1.0
add address=198.x.x.1/24 comment="OVH GRE Tunnel" interface=ether3 network=198.x.x.0
/ip dns
set servers=1.1.1.1,1.0.0.1
/ip firewall address-list
add address=192.168.1.0/24 list=LAN
add address=198.x.x.0/24 list=OVH
/ip firewall filter
add action=accept chain=forward comment="Accept established and related" connection-state=established,related
add action=drop chain=forward comment="Drop invalid" connection-state=invalid
add action=fasttrack-connection chain=forward hw-offload=no
add action=accept chain=input comment="Accept established and related" connection-state=established,related
add action=drop chain=input comment="Drop invalid" connection-state=invalid
add action=accept chain=input comment="Accept ICMP" dst-limit=5,10,dst-address/1m40s in-interface=all-ppp limit=5,10:packet protocol=icmp
add action=accept chain=input comment="Accept GRE traffic from 145.x.x.191" in-interface=all-ppp protocol=gre src-address=145.x.x.191
add action=drop chain=input comment="Drop all traffic from PPP" in-interface=all-ppp
add action=drop chain=input comment="Drop TCP to 198.x.x.1 from WAN" dst-address=198.x.x.1 in-interface="OVH GRE Tunnel" protocol=tcp
add action=drop chain=input comment="Drop UDP to 198.x.x.1 from WAN" dst-address=198.x.x.1 in-interface="OVH GRE Tunnel" protocol=udp
add action=accept chain=input comment="Accept everything else"
/ip firewall nat
add action=masquerade chain=srcnat src-address=192.168.1.0/24
/ip firewall service-port
set ftp disabled=yes
set tftp disabled=yes
set irc disabled=yes
set sip disabled=yes
/ip route
add disabled=no distance=1 dst-address=0.0.0.0/0 gateway="OVH GRE Tunnel" pref-src="" routing-table=666 scope=30 suppress-hw-offload=no target-scope=10
add disabled=no dst-address=198.x.x.0/24 gateway=ether3 routing-table=666 suppress-hw-offload=no
add disabled=no dst-address=192.168.1.0/24 gateway=ether2 routing-table=666 suppress-hw-offload=no
/ip service
set telnet disabled=yes
set ftp disabled=yes
set www disabled=yes
set api disabled=yes
set api-ssl disabled=yes
/routing rule
add action=lookup disabled=no src-address=198.x.x.0/24
/system clock
set time-zone-name=Europe/London
/system ntp client
set enabled=yes
/system ntp client servers
add address=80.86.38.193
add address=143.210.16.201
add address=178.79.160.57
add address=217.114.59.3
add address=87.117.251.3
add address=109.237.17.140
add address=178.79.162.34
add address=188.39.98.165
I've upgraded to v7.1rc1 to see if it would make any difference; I didn't imagine it would, and sadly I was right.
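One thing I still want to rule out is a single core on the CCR maxing out while the tunnel is loaded, since traffic for one tunnel/connection can end up pinned to one core even when the overall CPU average looks low. Something like this on the router during a speed test should show it (a diagnostic sketch, not a fix):

```routeros
# Per-core load while the tunnel is under load; look for one core near 100%
/system resource cpu print
# Per-process CPU breakdown over a 10-second window
/tool profile duration=10
```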
EDIT 2: Tried an IPIP tunnel; it was slightly worse (~520Mbit), so I'm back on the GRE tunnel. I have a feeling I may have to ask the MikroTik forum about this, as I'm a little stumped. I'm happy with the speed if this is the most I can get (about 400Mbit before the ping starts to rise significantly); I'd just like to know what the cause of the supposed bottleneck is. I guess I hate having an unsolved mystery, which this is.
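If anyone wants to reproduce the fragmentation angle: it's worth verifying that full-size packets actually pass through the tunnel unfragmented, since silent fragmentation of GRE payloads would explain throughput collapsing well below line rate. A sketch (198.x.x.2 stands in for the OVH end of the tunnel subnet; substitute the real address, and note that RouterOS ping's size includes the IP/ICMP headers, so 1468 exercises the full tunnel MTU):

```routeros
# Should succeed at the tunnel MTU and fail one byte above it
/ping 198.x.x.2 size=1468 do-not-fragment count=4
/ping 198.x.x.2 size=1469 do-not-fragment count=4
```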