Kitz ADSL Broadband Information
Pages: 1 ... 7 8 [9]

Author Topic: LAN setup  (Read 10497 times)

Chrysalis

  • Content Team
  • Kitizen
  • *
  • Posts: 4804
Re: LAN setup
« Reply #120 on: January 26, 2017, 09:15:45 AM »

I decided to check the raw SMART data of the SSD, and considering I have used this pfSense unit for only about a month, I am surprised at how much has been written.

175 GB of data has been written to the SSD.

In comparison, my desktop SSD, which hosts games and the OS and has been in use for over two years, has 8.34 TB of writes.

Bear in mind though that the SSD in my pfSense unit is only 60 GB versus my desktop's 512 GB, and it is planar NAND versus the 3D NAND in my PC, so its endurance is far lower. Both are MLC NAND.

The largest dnsbl feed, with 800k domains, downloads as a file of circa 100 MB; it is then converted to another format to be compatible with unbound, and the converted file is 44 MB. So every update from that feed means about 150 MB of writes. Updates run once a day, but I have probably done a few dozen manual updates on top of that while testing. Now that I have ZFS compression in play I will continue monitoring to see whether the rate of writes slows down (with copies set to 3 it may well actually jump up). There is also the possibility this drive was not shipped to me unused (a return), as I did not check whether the stats were all in a new state before using it.
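For anyone wanting to pull the same figure, pfSense (being FreeBSD-based) ships smartmontools, and on most drives the Total_LBAs_Written SMART attribute counts 512-byte sectors. A minimal sketch; the device name ada0 and the sample raw value are assumptions, not taken from this unit:

```shell
# On the router itself you would obtain the raw value with something like:
#   smartctl -A /dev/ada0 | grep Total_LBAs_Written
# Convert a sample raw sector count (512-byte sectors) to GiB:
echo 366284800 | awk '{printf "%.1f GiB written\n", $1 * 512 / 1024 / 1024 / 1024}'
```

A raw value around 366 million sectors works out at roughly the 175 GB figure above.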
« Last Edit: January 26, 2017, 09:18:34 AM by Chrysalis »
Logged
Sky Fiber Pro - Billion 8800NL bridge & PFSense BOX running PFSense 2.4 - ECI Cab

Chrysalis

  • Content Team
  • Kitizen
  • *
  • Posts: 4804
Re: LAN setup
« Reply #121 on: January 26, 2017, 03:00:38 PM »

Been doing some more testing with the traffic shaper. Whilst I find FAIRQ+codel superior for upstream traffic, it doesn't seem to be perfect for downstream, so I am now using HFSC+codel, which has shown better results in some quick tests using Steam, dslreports and TBB speed testing, but it will need days of real internet use to judge properly.

Code:
pfTop: Up Queue 1-14/14, View: queue, Cache: 10000                                                                            14:55:55

QUEUE                             BW SCH  PRIO     PKTS    BYTES   DROP_P   DROP_B QLEN BORROW SUSPEN     P/S     B/S
qInternet                        19M fair             0        0        0        0    0                     0       0
qACK                               0 fair    6   307812 19973104        0        0    0                     0       0
qDefault                           0 fair    4   248278  159647K        0        0    0                     3     207
qOthersHigh                        0 fair    5    13414  1717003        0        0    0                     0       0
qOthersLow                         0 fair    3   143551 82363014        0        0    0                     0       0
qICMP                              0 fair    7     1728   113444        0        0    0                     2     221
qBulk                              0 fair    2      649   256446        0        0    0                     0       0
root_igb1                        67M hfsc    0        0        0        0        0    0                     0       0
 qDefault                      3357K hfsc        220764  249884K        0        0    0                     2     357
 qICMP                         1342K hfsc           428    31672        0        0    0                     0       0
 qACK                          3357K hfsc         67140  3883408        0        0    0                     0       0
 qOthersHigh                     33M hfsc         13584  1638300        0        0    0                     0       0
 qOthersLow                      13M hfsc        193416  284731K       35    52990    0                     0       0
 qBulk                         6714K hfsc        179147  263734K     3894  5824621    0                     0       0

Steam remains the ultimate test for me, as it floods the line with dozens of TCP sessions (it seems they designed it to get round poor ISPs), and under normal circumstances it will cause packet loss on small packets such as SSH packets and pings.  Steam testing has been more positive using HFSC than FAIRQ or PRIQ.
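As a quick way of reading drop pressure from pfTop output like the paste above, drops can be expressed as a share of packets per queue. A sketch with awk, using the qBulk row from the paste (the hfsc rows have no PRIO column, so PKTS is field 4 and DROP_P is field 6):

```shell
# qBulk row fields: QUEUE BW SCH PKTS BYTES DROP_P ...
echo "qBulk 6714K hfsc 179147 263734K 3894" |
  awk '{printf "%s: %.2f%% of packets dropped\n", $1, 100 * $6 / $4}'
```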
Logged
Sky Fiber Pro - Billion 8800NL bridge & PFSense BOX running PFSense 2.4 - ECI Cab

Chrysalis

  • Content Team
  • Kitizen
  • *
  • Posts: 4804
Re: LAN setup
« Reply #122 on: January 26, 2017, 05:34:38 PM »

Found this gem: add /status.php to the end of the IP URL, e.g. https://192.168.1.252/status.php, and you get a very nice informational page :)
Logged
Sky Fiber Pro - Billion 8800NL bridge & PFSense BOX running PFSense 2.4 - ECI Cab

adrianw

  • Member
  • **
  • Posts: 71
Re: LAN setup
« Reply #123 on: January 27, 2017, 12:29:35 AM »

Found this gem: add /status.php to the end of the IP URL, e.g. https://192.168.1.252/status.php, and you get a very nice informational page :)
Thanks!
I can see that being very useful at times.
Logged

Chrysalis

  • Content Team
  • Kitizen
  • *
  • Posts: 4804
Re: LAN setup
« Reply #124 on: January 30, 2017, 05:52:46 PM »

This looks pretty sweet for anyone considering pfSense.

http://www.asrock.com/ipc/overview.asp?Model=NAS-9601

It's a newer-gen version of my CPU, the unit has six Intel ports, and it has VGA out so no serial console stuff is needed.
Logged
Sky Fiber Pro - Billion 8800NL bridge & PFSense BOX running PFSense 2.4 - ECI Cab

Ronski

  • Helpful
  • Kitizen
  • *
  • Posts: 2244
Re: LAN setup
« Reply #125 on: January 30, 2017, 07:28:05 PM »

Very nice, but it seems it's not readily available.
Logged

Chunkers

  • Reg Member
  • ***
  • Posts: 302
  • Brick Wall head-banger
Re: LAN setup
« Reply #126 on: January 30, 2017, 11:17:51 PM »

This looks pretty sweet for anyone considering pfSense.

http://www.asrock.com/ipc/overview.asp?Model=NAS-9601

It's a newer-gen version of my CPU, the unit has six Intel ports, and it has VGA out so no serial console stuff is needed.

That IS nice, but
Very nice, but it seems it's not readily available.
AKA, does anyone actually sell it?

OTOH, I bet it's going to be expensive. Plus I have an ASRock BeeBox N3000 and it's a pile of unreliable crap.

Chunks
Logged

burakkucat

  • Global Moderator
  • Senior Kitizen
  • *
  • Posts: 19376
  • Over the Rainbow
    • The ELRepo Project
Re: LAN setup
« Reply #127 on: January 30, 2017, 11:29:15 PM »

This looks pretty sweet for anyone considering pfSense.

http://www.asrock.com/ipc/overview.asp?Model=NAS-9601

It's a newer-gen version of my CPU, the unit has six Intel ports, and it has VGA out so no serial console stuff is needed.

I looked at the on-site images and could not see a VGA port . . . So I downloaded the data sheet. That mentions VGA under the "Graphics" heading but the section under the "Rear I/O" heading states "VGA 0"!

All a bit of a mystery.  :-\
Logged
:cat:  100% Linux and, previously, Unix. Co-founder of the ELRepo Project.


Chrysalis

  • Content Team
  • Kitizen
  • *
  • Posts: 4804
Re: LAN setup
« Reply #128 on: January 31, 2017, 10:46:18 AM »

I have reduced ZFS copies back to the default of 1 on /tmp and /var, and left it at 3 for the rest of the system.

With this it is averaging 5 GB of writes a day to the SSD, which I am pretty sure is mostly due to dnsbl feed processing. If we assume my SSD had zero bytes written when installed, then I believe this is a similar rate of writes to UFS.  That is because the SSD is a SandForce drive with native compression; on a normal SSD which doesn't compress, I think ZFS will reduce writes.

On pfSense, /var is mostly storage for logs and dynamically generated files (including the dnsbl files), so any corruption to these files should not have persistent effects, hence changing it back; likewise /tmp only houses short-lived files.

I also changed logbias on ZFS globally from latency to throughput, which is more friendly to SSDs.
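A minimal sketch of the dataset changes described above, as they might look on the command line; the dataset names (zroot/tmp, zroot/var) are assumptions, since pool layouts differ, so check yours with `zfs list` first:

```shell
# Drop redundant copies on datasets holding regenerable data only.
zfs set copies=1 zroot/tmp
zfs set copies=1 zroot/var
# Favour throughput over latency when flushing -- gentler on SSDs.
# Set on the pool root so child datasets inherit it.
zfs set logbias=throughput zroot
```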

I am going to examine the pfBlockerNG code to see where it writes its files during the update process, and whether some of that work can be moved to a ramdisk. I don't want to enable the built-in pfSense ramdisk as it stands, because by default that makes a lot of things non-persistent, such as logs and the processed dnsbl files, meaning that on a reboot the logs are lost and pfBlockerNG has to download a fresh set of files due to the old ones being on volatile storage.  What I am aiming for is to have the original downloads and any intermediate processing files on RAM storage, and only the completed dnsbl files on persistent storage.

/tmp also houses the pf rule set and a config.cache file, which are frequently rewritten, but both are only small files.

I have also been experimenting with different power states for the CPU.

Since I had another panic last week, I disabled some power-saving functions in the BIOS which can cause instability, the most important being DVFS on the RAM (so the RAM stays at stock clocks and voltage throughout). My current config also has powerd set not to let the CPU drop below its stock speed, while still allowing it to ramp up to turbo speeds (so things like dnsbl updates are faster); this also removes low-voltage states from the CPU, which can cause instability. This config adds probably 3-5C to my average CPU temperatures.

So the experiment has been to let two CPU cores enter the C2 state when idle (very low performance cost) and the other two enter C3 (moderate performance cost when going from idle to load). This did not achieve anything significant, so I will be reverting all cores to C2, or even leaving them at the default C1.
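On FreeBSD, which pfSense is built on, the per-core cap is the dev.cpu.N.cx_lowest sysctl (visible in the output below). A sketch of the split described above:

```shell
# Allow cores 0,1 down to C2 and cores 2,3 down to C3 when idle.
sysctl dev.cpu.0.cx_lowest=C2
sysctl dev.cpu.1.cx_lowest=C2
sysctl dev.cpu.2.cx_lowest=C3
sysctl dev.cpu.3.cx_lowest=C3
# Or cap every core at once:
# sysctl hw.acpi.cpu.cx_lowest=C2
```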

Here is some data; note that core 0 always seems to be in a wake state doing something, so it is mostly unaffected by any changes.

Cores 0,1 set to C2; cores 2,3 set to C3:

Code:
dev.cpu.3.cx_method: C1/mwait/hwc C2/mwait/hwc C3/mwait/hwc
dev.cpu.3.cx_usage_counters: 1608486 118501 844080
dev.cpu.3.cx_usage: 62.56% 4.60% 32.82% last 33390us
dev.cpu.3.cx_lowest: C3
dev.cpu.3.cx_supported: C1/1/1 C2/2/500 C3/3/1000
dev.cpu.2.cx_method: C1/mwait/hwc C2/mwait/hwc C3/mwait/hwc
dev.cpu.2.cx_usage_counters: 287976 14077 383181
dev.cpu.2.cx_usage: 42.02% 2.05% 55.91% last 16161us
dev.cpu.2.cx_lowest: C3
dev.cpu.2.cx_supported: C1/1/1 C2/2/500 C3/3/1000
dev.cpu.1.cx_method: C1/mwait/hwc C2/mwait/hwc C3/mwait/hwc
dev.cpu.1.cx_usage_counters: 1925350 1301238 0
dev.cpu.1.cx_usage: 59.67% 40.32% 0.00% last 43us
dev.cpu.1.cx_lowest: C2
dev.cpu.1.cx_supported: C1/1/1 C2/2/500 C3/3/1000
dev.cpu.0.cx_method: C1/mwait/hwc C2/mwait/hwc C3/mwait/hwc
dev.cpu.0.cx_usage_counters: 39609431 170 0
dev.cpu.0.cx_usage: 99.99% 0.00% 0.00% last 102us
dev.cpu.0.cx_lowest: C2
dev.cpu.0.cx_supported: C1/1/1 C2/2/500 C3/3/1000

This shows core 0 is in the C1 state 99.99% of the time.
Core 1 manages to get into the C2 state about 40% of the time.
Cores 2,3 are similar to core 1, except they drop to C3 instead of C2.

And the temperatures for this config? Here they are.

Code:
dev.cpu.3.temperature: 38.0C
dev.cpu.2.temperature: 38.0C
dev.cpu.1.temperature: 37.0C
dev.cpu.0.temperature: 37.0C

Pretty much no benefit from either C2 or C3. C3 actually seems to make things worse, as there is some work involved for the CPU to move between states, and C3 takes much longer than C2 to enter and exit.

Here is data from this morning, after the unit has been idle and with no heating on.

Code:
root@PFSENSE tmp # sysctl dev.cpu |grep cx
dev.cpu.3.cx_method: C1/mwait/hwc C2/mwait/hwc C3/mwait/hwc
dev.cpu.3.cx_usage_counters: 2572229 196691 2064691
dev.cpu.3.cx_usage: 53.21% 4.06% 42.71% last 16775us
dev.cpu.3.cx_lowest: C3
dev.cpu.3.cx_supported: C1/1/1 C2/2/500 C3/3/1000
dev.cpu.2.cx_method: C1/mwait/hwc C2/mwait/hwc C3/mwait/hwc
dev.cpu.2.cx_usage_counters: 800620 38919 1055540
dev.cpu.2.cx_usage: 42.24% 2.05% 55.69% last 58977us
dev.cpu.2.cx_lowest: C3
dev.cpu.2.cx_supported: C1/1/1 C2/2/500 C3/3/1000
dev.cpu.1.cx_method: C1/mwait/hwc C2/mwait/hwc C3/mwait/hwc
dev.cpu.1.cx_usage_counters: 3184420 3416255 0
dev.cpu.1.cx_usage: 48.24% 51.75% 0.00% last 702us
dev.cpu.1.cx_lowest: C2
dev.cpu.1.cx_supported: C1/1/1 C2/2/500 C3/3/1000
dev.cpu.0.cx_method: C1/mwait/hwc C2/mwait/hwc C3/mwait/hwc
dev.cpu.0.cx_usage_counters: 112912517 423 0
dev.cpu.0.cx_usage: 99.99% 0.00% 0.00% last 252us
dev.cpu.0.cx_lowest: C2
dev.cpu.0.cx_supported: C1/1/1 C2/2/500 C3/3/1000
root@PFSENSE tmp # sysctl dev.cpu |grep temperature
dev.cpu.3.temperature: 36.0C
dev.cpu.2.temperature: 36.0C
dev.cpu.1.temperature: 35.0C
dev.cpu.0.temperature: 35.0C

Every time I check, the pattern is fairly reliable: cores 2 and 3 have higher temps than core 1, while core 0 can sometimes be higher still.  The power savings from these modes in raw watts are also very low, as the CPU itself is rated at only 6 watts, so it doesn't use much power to begin with.

Quick update: C2 is actually providing a meaningful benefit. I first dropped cores 2,3 to C2, and this was the result, a 1C drop to match core 1.

Code:
dev.cpu.3.temperature: 36.0C
dev.cpu.2.temperature: 36.0C
dev.cpu.1.temperature: 36.0C
dev.cpu.0.temperature: 36.0C

Then watch what happens when I lock core 1 to C1 only.

Code:
dev.cpu.3.temperature: 36.0C
dev.cpu.2.temperature: 36.0C
dev.cpu.1.temperature: 39.0C
dev.cpu.0.temperature: 36.0C

And one minute later:

Code:
dev.cpu.3.temperature: 36.0C
dev.cpu.2.temperature: 36.0C
dev.cpu.1.temperature: 40.0C
dev.cpu.0.temperature: 36.0C

However core 0, even though it spends 99.99% of its time in C1, doesn't have temps that high, which is interesting; likewise, locking core 0 to C1 has no effect as it is in that state 99.99% of the time anyway.
« Last Edit: January 31, 2017, 11:17:08 AM by Chrysalis »
Logged
Sky Fiber Pro - Billion 8800NL bridge & PFSense BOX running PFSense 2.4 - ECI Cab

Chrysalis

  • Content Team
  • Kitizen
  • *
  • Posts: 4804
Re: LAN setup
« Reply #129 on: January 31, 2017, 11:00:48 PM »

Found the cause of the writes: it's not pfBlockerNG, it's the RRD graphs generated by pfSense.

In /var/db/rrd are the graphing databases, and they are updated every minute; on my system they total 7.2 MB. Multiply that by 60 minutes and that's roughly 430 MB of writes every hour, or over 10 GB every 24 hours (when uncompressed).
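The arithmetic can be sanity-checked with a one-liner (7.2 MB per minute being the figure above):

```shell
# rrd databases rewritten once per minute; scale to hourly and daily totals
awk 'BEGIN {
  per_min = 7.2                                    # MB rewritten each minute
  printf "%.0f MB/hour, %.1f GB/day\n", per_min * 60, per_min * 60 * 24 / 1024
}'
```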

I took the idea from the scripts deployed to back up the traffic stats, and did the following:

1 - Temporarily disabled graphing.
2 - Created a backup script to run at a set interval, and ran it once to create an initial backup.
3 - Wiped /var/db/rrd but left the directory in place so it can be mounted over.
4 - Created a small ramdisk and mounted it at that location; I chose 200 MB, which should easily be enough.
5 - Created a restore script and ran it to restore the files.
6 - Re-enabled graphing.

I will next set a cron entry to run the backup script at intervals, probably once an hour or maybe every 15 minutes.
The scripts will also be added to shellcmd to create the ramdisk and restore the backup at boot.
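The steps above could be sketched roughly as below. mdmfs is the stock FreeBSD tool for mounting a memory filesystem; the backup path here is a hypothetical choice, not taken from the thread:

```shell
# 1. back up the rrd databases to persistent storage (hypothetical path)
tar -czf /root/rrd-backup.tgz -C /var/db rrd
# 2. mount a 200 MB ramdisk over the emptied directory
mdmfs -s 200m md /var/db/rrd
# 3. restore the backup onto the ramdisk
tar -xzf /root/rrd-backup.tgz -C /var/db
# Step 1 re-run from cron (e.g. hourly); steps 2-3 from shellcmd at boot.
```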
Logged
Sky Fiber Pro - Billion 8800NL bridge & PFSense BOX running PFSense 2.4 - ECI Cab

Chrysalis

  • Content Team
  • Kitizen
  • *
  • Posts: 4804
Re: LAN setup
« Reply #130 on: February 16, 2017, 04:18:19 PM »

OK, as it turns out, the SSD I put in my pfSense unit has a higher warranty write limit than my much more expensive Samsung 512 Pro.

The Kingston 60 GB mSATA SSD has 218 TB of warranty write coverage.
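Combined with the ~5 GB/day write rate mentioned earlier in the thread, that warranty figure is nowhere near a practical limit; a rough back-of-envelope check:

```shell
# 218 TB warranty TBW against ~5 GB/day of writes (figures from the thread)
awk 'BEGIN {
  days = 218 * 1024 / 5                  # TBW in GB divided by daily writes
  printf "%.0f days (~%.0f years)\n", days, days / 365
}'
```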
Logged
Sky Fiber Pro - Billion 8800NL bridge & PFSense BOX running PFSense 2.4 - ECI Cab

Chrysalis

  • Content Team
  • Kitizen
  • *
  • Posts: 4804
Re: LAN setup
« Reply #131 on: April 19, 2017, 03:36:03 PM »

Since I had to take the unit apart after it got stuck in hibernation mode following a power cut, I took a new pic to show how tight a fit it is to add the Intel i350 NIC ports.

https://drive.google.com/file/d/0B7P3Ne0hzKcGd3FrWGxDdUhDbkk/view?usp=drivesdk
« Last Edit: April 19, 2017, 03:38:41 PM by Chrysalis »
Logged
Sky Fiber Pro - Billion 8800NL bridge & PFSense BOX running PFSense 2.4 - ECI Cab

Chrysalis

  • Content Team
  • Kitizen
  • *
  • Posts: 4804
Re: LAN setup
« Reply #132 on: April 19, 2017, 03:50:44 PM »

I looked at the on-site images and could not see a VGA port . . . So I downloaded the data sheet. That mentions VGA under the "Graphics" heading but the section under the "Rear I/O" heading states "VGA 0"!

All a bit of a mystery.  :-\
Yeah, it has a VGA header on board but no port on the case, so it needs an attachment.
Logged
Sky Fiber Pro - Billion 8800NL bridge & PFSense BOX running PFSense 2.4 - ECI Cab

nallar

  • Member
  • **
  • Posts: 36
    • Smokeping
Re: LAN setup
« Reply #133 on: April 20, 2017, 01:00:04 PM »

You're running 2.4, so you could try fq_codel instead of FAIRQ+codel. It should be better.

https://forum.pfsense.org/index.php?topic=126637.0
Logged
Virgin Media cable, A&A and Sky DSL. pfSense router.

Chrysalis

  • Content Team
  • Kitizen
  • *
  • Posts: 4804
Re: LAN setup
« Reply #134 on: April 20, 2017, 02:24:23 PM »

on todo list :)
Logged
Sky Fiber Pro - Billion 8800NL bridge & PFSense BOX running PFSense 2.4 - ECI Cab