Apologies for bumping an old thread, but I figure this might be useful information for anyone who stumbles across it.
I've recently changed my setup. Previously I used SoftEther VPN Server (L2TP), where each IP address I wanted on my home network had its own L2TP login/session, plus some trickery with the firewall. Then I came across a thread on another forum (one typically used for discussing servers and the like) with a handy guide on using a GRE tunnel to route a block of public IPv4 addresses to another location. The tutorial's use case was tunneling a subnet of DDoS-protected public IPv4 addresses from a server to another location running virtual machines. I slightly modified this so that my router provides the IPv4 addresses from the block to one or more devices on my LAN instead.
Unfortunately, from what I can see, the FireBrick 2900 doesn't have the capability of setting up a GRE tunnel, so I ended up using my EdgeRouter Pro 8 instead. For some reason I couldn't get the GRE tunnel and policy routing working via the config tree: the configuration would apply, but the tunnel wouldn't work. However, manually running the commands from the tutorial via SSH (plus one extra command so I also had a route for my LAN IPv4 addresses) worked fine. I've managed to persist these commands across power cycles of the router by putting a script in the post-config.d folder. Setting eth3 to the first IPv4 address of the block worked fine via the config tree.
For those curious or interested, the tutorial I came across is at
https://www.lowendtalk.com/discussion/156850/howto-tunnel-ddos-protected-ovh-ip-to-vms-in-other-datacenter/p1
Assume 83.x.x.169 is the IPv4 address of my EdgeRouter and 51.x.x.62 is the IPv4 address of the server at the datacenter which has the block of public IPv4 addresses available. We'll also assume the routed block is 198.x.x.0/24.
I ran the following on the server at the datacenter:
/usr/sbin/ip tunnel add gre1 mode gre remote 83.x.x.169 local 51.x.x.62 ttl 255
/usr/sbin/ip link set gre1 up
/usr/sbin/ip route add 198.x.x.0/24 dev gre1
/usr/sbin/iptables -A FORWARD -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
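The tutorial doesn't cover keeping the server-side commands across reboots, but if the datacenter server runs systemd, something like the following oneshot unit should do it. This is my own sketch, not from the tutorial; the unit name is arbitrary and the addresses are the placeholders from above.

```shell
# Hypothetical persistence sketch for the datacenter server: a systemd
# oneshot unit that recreates the tunnel and route on boot.
cat <<'EOF' | sudo tee /etc/systemd/system/gre1-tunnel.service
[Unit]
Description=GRE tunnel to home router
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/ip tunnel add gre1 mode gre remote 83.x.x.169 local 51.x.x.62 ttl 255
ExecStart=/usr/sbin/ip link set gre1 up
ExecStart=/usr/sbin/ip route add 198.x.x.0/24 dev gre1
ExecStop=/usr/sbin/ip tunnel del gre1

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable --now gre1-tunnel.service
```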
Then I ran the following on my EdgeRouter here, at home:
/sbin/ip tunnel add gre1 mode gre remote 51.x.x.62 local 83.x.x.169 ttl 255
/sbin/ip link set gre1 up
/sbin/ip rule add from 198.x.x.0/24 table 666
/sbin/ip route add default dev gre1 table 666
/sbin/ip route add 198.x.x.0/24 dev eth3 table 666
/sbin/ip route add 192.168.1.0/24 dev eth1 table 666
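For reference, the post-config.d script I mentioned above is just these same commands in an executable file. EdgeOS runs executable scripts in /config/scripts/post-config.d/ after the boot configuration is applied, which is how the commands persist across power cycles. The filename is my own choice.

```shell
#!/bin/bash
# Saved as /config/scripts/post-config.d/gre-tunnel.sh (remember: chmod +x).
# Recreates the GRE tunnel and policy routes after each boot.
/sbin/ip tunnel add gre1 mode gre remote 51.x.x.62 local 83.x.x.169 ttl 255
/sbin/ip link set gre1 up
/sbin/ip rule add from 198.x.x.0/24 table 666
/sbin/ip route add default dev gre1 table 666
/sbin/ip route add 198.x.x.0/24 dev eth3 table 666
/sbin/ip route add 192.168.1.0/24 dev eth1 table 666
```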
This must also be run on both sides:
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Finally, 198.x.x.1/24 was added as an IPv4 address to the eth3 interface on the EdgeRouter via the UI; this is the address LAN devices use as their gateway.
A device on my home network can then do something like the following:
IP Address: 198.x.x.2
Subnet Mask: 255.255.255.0
Gateway IP Address: 198.x.x.1
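On a Linux device, the equivalent of the static settings above would be something like the following. This is just an illustration; eth0 is a placeholder for whatever the device's LAN interface is called.

```shell
# One-off static configuration on a Linux client behind the EdgeRouter.
sudo ip addr add 198.x.x.2/24 dev eth0
sudo ip route add default via 198.x.x.1
# Quick sanity check that the gateway is reachable:
ping -c 3 198.x.x.1
```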
The routed block must be at least a /30, or the tutorial's approach won't work.
Obviously this is trickier if you don't have a static IPv4 address from your ISP, or if your router isn't capable of running a GRE tunnel. It can't be done behind NAT either.
Suffice to say, I'm pleased with the outcome. The L2TP setup I had before wasn't perfect: on the rare occasions my PPPoE connection dropped, a few of the L2TP connections would oddly lose all internet access until I reconnected the troublesome ones once or twice, or at the extreme rebooted the FireBrick. Another benefit is that I'm no longer reliant on an ISP selling me a small block of IPv4 addresses, and if I change to another ISP these addresses come with me, as I hope to do very soon because of a local rollout.
EDIT: I was having intermittent issues downloading some things via any of the public IPv4 addresses; it turned out I'd forgotten about the MTU. If you're on a PPPoE connection, make sure the MTU on the GRE tunnel is set correctly on both ends (it should be 1468 or lower: 1492 for PPPoE minus 24 bytes of GRE encapsulation overhead). I had initially set a slightly higher MTU on the server's side while the MTU was correct on my EdgeRouter's side, which puzzled me for a little while.
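For reference, setting the tunnel MTU is a one-liner on each end. The 1468 figure assumes a standard 1492-byte PPPoE MTU; adjust downwards if yours is lower.

```shell
# On the EdgeRouter (PPPoE side):
/sbin/ip link set gre1 mtu 1468
# On the datacenter server - match the lower of the two path MTUs:
/usr/sbin/ip link set gre1 mtu 1468
```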
EDIT 2: Forgot to include mention of net.ipv4.ip_forward needs to be '1' on both sides or it won't work at all.
EDIT 3: Added the MSS clamping instruction; without it I've noticed certain websites struggle to load (particularly with SSL/TLS). Doing MSS clamping on the datacenter's server resolves that problem.
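As a possible refinement (my own variation, not from the tutorial), the clamp can be scoped so it only touches traffic forwarded out of the tunnel, rather than every forwarded TCP connection on the server:

```shell
# Same clamp as above, but restricted to packets leaving via gre1.
/usr/sbin/iptables -A FORWARD -o gre1 -p tcp -m tcp \
    --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
```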