:: Linux HDD farm build + SAMBA to Windows folder sharing


this entry is a very rough collection of procedures and thoughts from building my new HDD farm. this is not my first ghetto build; my first was 20 years ago with IBM SCSI-3 drives. those suckers ran hot and were somewhat flimsy, as my environment has no temperature control. that was a very bad move born of my lack of experience and research :(. this build is an expansion away from 2 DLINK NAS boxes, which have very badly designed thermal management mechanics. i think ALL smaller cases have this problem, as long as the HDD sits in a casing which itself sits inside another casing.
the next problem with the DLINK NAS is transfer speed: the highest rate i have clocked is about 26MB/s. the idea is to build a low power mini ITX "farm" and expand the SATA ports. with any lowbie SATA port, i bet i can easily achieve 100MB/s.

and of course, finally, to keep data as it is. after reading some articles that were newish to me, i have to agree that RAID is not a backup; it does not really protect your data. and with the discovery of SNAPRAID, traditional RAID pales in comparison when it comes to flexibility.

there has always been a demand to up the storage integrity level i have, and there are too many options. this thread on hardforum chronicles a journey which i think many have traveled before. http://hardforum.com/showthread.php?t=1749919

however, right in the middle of it, these options were suggested by user [COOL1NET6] :

http://www.flexraid.com/
http://snapraid.sourceforge.net/
http://lime-technology.com/
http://stablebit.com/
http://www.drivebender.com/

it was a refreshing turn

and the thread later led to this chart, which is very interesting. it comes from [http://snapraid.sourceforge.net/compare.html]. snapraid, being GPLv3 software, has indeed offered very very tasty insights into the current state of HDD storage options (especially regarding itself).

and more importantly, most new linux users face a high initial learning curve. diving into the forum channel we can see that most thread headers are initial [shouts] for help, which ALL ended solved (even if the thread title was never edited to say [solved]) ... so the only conclusion i can draw from reading the threads is that, apart from initial software/GUI problems, this seems to be the format that best gels with both hardware limitations AND software limitations. the best part of the software is that it is HDD interface independent: you could have a mix of SCSI, SATA, etc.

previously i had SCSI-3 HDD farms for my video raid. raid back in the day was a PITA: all HDDs had to be matching pieces, and the hardware ASICs were limited to a few. that was the era when the large reputable brand giants were the only speaking authority (back in the 1990s). heat has been and always will be the number 1 HDD killer. but i am happy that today i have migrated to something else.

before i dive deeper into SNAPRAID, below is some messy stuff i collected during my exploration...

this link is the best article i have seen on how to partition linux/ZFS (of course i'm not very NUXY yet) --> http://www.funtoo.org/ZFS_Install_Guide

and this FUNTOO rescue disk is the rescue booter, WITH ZFS ! --> http://ftp.osuosl.org/pub/funtoo/distfiles/sysresccd/sysresccd-4.2.0_zfs_0.6.3.iso

the best description of basic LINUX boot HDD partitioning, just like how we used to muck around with MSDOS floppy disks (format a:\ ... etc etc), is found here in the MINT forums: http://forums.linuxmint.com/viewtopic.php?t=122276

basically, at the point of a fresh install ...
you need to create something like this (a parted sketch of the same layout follows this list)

1) 250-500MB : /boot (eg : 384MB)
2) =RAM size : [SWAP] (eg : 16GB)
3) =OS size : / [root] (eg : 32GB)
4) =leftover : /home (eg : 1TB)
5) 100MB : [EFI boot] (eg : 100MB)
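
as a rough sketch, the same layout laid down with [parted] on a GPT disk (the device name /dev/sda and the exact boundaries are my assumptions, adapt to your drive):

# parted /dev/sda mklabel gpt
# parted /dev/sda mkpart ESP fat32 1MiB 101MiB
# parted /dev/sda set 1 boot on
# parted /dev/sda mkpart boot ext2 101MiB 485MiB
# parted /dev/sda mkpart swap linux-swap 485MiB 16.5GiB
# parted /dev/sda mkpart root ext4 16.5GiB 48.5GiB
# parted /dev/sda mkpart home ext4 48.5GiB 100%

partition 1 = [EFI boot], 2 = /boot (384MB), 3 = [SWAP] (16GB), 4 = / (32GB), 5 = /home (the leftover).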


and the partitioning for ZFS (if using the FUNTOO ZFS native booter) follows that same guide. and then

# mkfs.ext2 -m 1 /dev/sda1

this formats the /boot partition of the ZFS HDD (the /boot slice stays plain ext2; ZFS itself goes onto the remaining pool partition)
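
after that, the pool creation itself goes roughly like this (a minimal sketch only, NOT the funtoo guide verbatim; the pool name "tank" and partition /dev/sda3 are made up):

# zpool create -o ashift=12 tank /dev/sda3
# zfs create tank/home

[-o ashift=12] aligns the pool to 4KiB sectors, which suits most modern HDDs.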
the other notable partitioning utility that is self-booting is GPARTED (live version).
http://gparted.org/download.php
GPARTED live can be booted from USB (a 230MB+ download). it is a debian-based utility.

for the more advanced machines that can boot directly from USB, the windows utility "Win32 Disk Imager" is very very useful. however, because it writes at a low level, you will sometimes need the windows DISK management utility to re-format the USB stick back to its original capacity after an image write.

as for initial mucking around, PC-BSD (ZFS) is not for seasoned Windows/MacOS users (it feels like a trial OS built with very little serious operational use in mind, but great for experimentation). the OS is fairly new and not suitable for regular heavy duty office processing. and so i had to try FREENAS too (for ZFS purposes)

so after endless days of trying to get a natively stable ZFS install onto my QC5000 A4 setup, i have to say i conceded to the option of [SNAPRAID]. the traditional mindset of hardware RAID5, 50, 60, 10, etc in the limited threads that discussed it has to change toward highly adaptable softraid hybrids like SNAPRAID. it is so true when they say the reasons to RAID up your HDDs are 1) error correction and 2) data protection, and there is no reason to doubt that hardware raid does not really provide 100% protection. SNAPRAID does not provide 100% protection either, but the way it works around the shortcomings of hardware raid convinces me that the era of hardware raid is truly over.

so i started to test various linux distros with it on the QC5000

test run 1, [SNAPRAID] on XUBUNTU 14.04.2 --> make success, install success (no tests)

 
test run 2, [SNAPRAID] on UBUNTU 14.04.2 --> make success, install success (no tests)

in UBUNTU's file manager you are unable to open a terminal inside a directory like in LUBUNTU. to do that you must install this

sudo apt-get install nautilus-open-terminal

test run 3, [SNAPRAID] on LUBUNTU 14.04.2 --> make success, install success, desktop crashes after idling for over 1hr

test run 4, [SNAPRAID] on DEBIAN LXDE 7.8 --> make success, install success. the simpler OS GUI seems more responsive than all the UBUNTUs tried. a driver problem was detected with the RTL8168, BUT it still manages to connect to the router and the internet in some generic driver mode (= no major problems). whoever made debian is slightly more human than the rest: upon reboot, the entire desktop is where you left it.

(debian GPART map : sda1 = boot, 90% of HDD; sda2 = extended, 10% of HDD; a logical partition inside the extended = linux swap. so much easier than all the rest, though this was done in simplified mode). the synaptic package manager in debian responds nearly instantly (almost no hourglassing wait time). under taskmanager, this stock install uses only 165MB of ram. at the bootup menu, the DEBIAN boot selection has a memtest function (which was recommended in the snapraid FAQ, iirc)

in a slight fluke, debian was thought to have installed successfully but was actually missing its updates/upgrades. so it turns out the stock LXDE debian can take the SNAPRAID install w/o going thru updates first.

test run 5, [SNAPRAID] on MINT XFCE 17 --> make success, install success, desktop speed is slow like ubuntu. shutdown error, entire OS stalls

sudo chown -R yourusername: /media/mountpoint

this is the magic word to say to the HDD folders ... DO IT DO IT !!! (it recursively hands ownership of the mountpoint to your user, so you can actually write to it)

Windows to LINUX VNC remote desktop?

server : tkx11vnc --> set listen

client : vncviewer --> activate connection
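
in command-line form, a minimal sketch (assuming plain x11vnc on the linux side and any vnc viewer on the windows side; the IP is hypothetical):

(on the linux box) # x11vnc -display :0 -usepw -forever
(on the viewing PC) vncviewer 192.168.1.50:0

[-usepw] asks for / uses a stored viewing password, [-forever] keeps listening after the first client disconnects.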

Debian 7.8 XFCE AMD64 is the fastest of the latest linuxes i have been testing so far; it beats LUBUNTU hands down. but it has a big problem with SUDO commands (the installer does not put your user into sudoers when a root password is set). so for all attempts and whatnot, at login just use user = ROOT. for some reason the XFCE desktop is more responsive than LXDE

so the best guide yet comes from youtube: how to give a non-ROOT user ROOT powers.

# su -

# nano /etc/sudoers
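
then add in [yourusername] under the ROOT line, copying over the line with the ALL entries. ctrl-X, save ... that's it. the relevant pair of lines ends up looking like this:

root ALL=(ALL:ALL) ALL
yourusername ALL=(ALL:ALL) ALL <-- the added line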

finally, after seeing the very nice performance of the DEBIAN platform, i started the SNAPRAID trial tests.

the final [CONF] file (a filled-in example follows after the template) :
parity /[path]/[path_etc]/[filename]
content /[path]/[path_etc]/[filename] <-- you need a minimum of 2, in 2 different places
content /[path]/[path_etc]/[filename]
disk d1 /[path]/[path_etc]/
disk d2 /[path]/[path_etc]/
exclude [name].[name] --> files not to be "parity-ed"
exclude [name].[name]
exclude [name].[name]
include [name].[name] --> files to be "parity-ed"
include [name].[name]
block_size [128/256/512 etc, in KiB]
autosave [every x GB of progress]
pool /[path]/[path_etc]/
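
filled in, a hypothetical 2-data-disk + 1-parity conf might look like this (every path here is made up for illustration):

parity /media/parity/snapraid.parity
content /var/snapraid/snapraid.content
content /media/d1/snapraid.content
disk d1 /media/d1/
disk d2 /media/d2/
exclude *.tmp
exclude /lost+found/
block_size 256
autosave 100
pool /media/pool/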

the steps to building up the DEBIAN install are pretty standard (the terminal form of steps 6-10 is sketched after this list)

1) grab the debian image
2) grab the Win32 image writer
3) write the debian iso to a USB
4) of course, use a system that can boot from USB (or DVD/CD)
5) use the ROOT login, update/upgrade the debian
6) grab the SNAPRAID package, untar it into a spare folder
7) cd into the folder, [./configure] it
8) [make] it, short coffee break
9) [make check] it, long coffee break
10) [make install] it, in 1/2 a jiffy
11) edit the [CONF] file, chuck it into the [etc] directory. make the [CONF] file so that it tests a small directory of files, say 100MB
12) [snapraid sync] it as a test run
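
steps 6-10 in terminal form (the version number here is an assumption, grab whatever tarball is current):

# tar xvzf snapraid-7.1.tar.gz
# cd snapraid-7.1
# ./configure
# make
# make check
# make install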

my 471GB test run finished in 57m15s. content files 1 and 2 are 250MB each.
ambient temp 29°C, HDD temps 36-38°C (HGST), 42°C (WD). start speed about 158MB/s, final speed 122MB/s. final parity file size 495GB. this being a 1-HDD-to-1-HDD hybrid backup test, the parity file was not expected to be smaller. with more HDDs involved, i would expect the parity file to stay at this size, i.e. parallel to the largest HDD in use.
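
for reference, the day-to-day snapraid cycle then becomes roughly this (the subcommands are snapraid's own, the routine is just my habit):

# snapraid diff (preview what changed since the last sync)
# snapraid sync (update the parity)
# snapraid scrub (spot-check part of the array for silent errors)
# snapraid fix -f [filename] (recover a lost or damaged file)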

**update 27th March 2015


A new color shade which exposes all hot regions


NIDEC high power fan (taken with flash)


same NIDEC high power fan (taken without flash)


High density HDD stacking. the space in between is about 1cm. the bottom heatsink is temporary. the top 3 are HGST, the bottom HDD is an old WD


side view. at this moment the stack is only supported by 2 brass strips drilled with predetermined holes. additional heatsinks are on their way; the flat format sinks were acquired to be attached directly to the HDDs to increase dissipation from the body. i completely missed the perspective that i could use brass strips to construct a HDD cage: brass is solderable, easy to drill, conducts heat and can be bent easily (and the strips are quite cheap on Taobao)


we can see the inner surfaces apparently running at a much higher temperature. it is actually the reflected temperature of the PCB below, bounced off the shiny top casing. based on smartmontools, all 3 HGST are running at approx 36°C. we will test this further when the full array is up, with 2 high power NIDEC fans and the flat pack heatsinks added to the sides (and possibly the PCBs as well, if space permits).

and after so much switching around trying different linux OSes, i went back to XUBUNTU,
mainly due to its compatibility with the newish AMD A4-5000 (the QC5000 board).


fully loading the file transfer capabilities with lotsa files. the top status window is the file move from the DLINK NAS (with jumbo frames = 9000) to HGST via ethernet. the bottom ones are internal HGST-to-HGST copies. it is really going to be a very long day ahead ... cos there are 5 more loads to transfer. this transfer is using nautilus, and it has a serious bug: the file copy stops and loses sync frequently in XUBUNTU. however it is somewhat flawless in debian.



while doing the file transfers, i discovered a very interesting bug. if you bar the DLINK NAS from connecting to the internet, it is able to transfer files 10-20% faster. why is that?


DLINK NAS 343 traits : jumbo frames on @9000, ASUS RT56U jumbo frames on @16000. internet-to-NAS blocked at the RT56U. NAS LLTD = OFF. firmware (NAS and interface) = 1.05, hw ver B. for what it is worth, the NAS fails to turn on its own internal fans even after hitting 41°C :(. max connected transfer rate 27.5MB/s .... so far. surprisingly, the model 320 managed a peak rate of 38-40MB/s.

update :: DEBIAN samba adventures, finally to link up samba to windows.

new tests, clean install debian 7.8 XFCE (4.8)

login root, run thru the usual apt-get update + upgrades, then here comes the samba server/client install

# apt-get install samba
# apt-get install smbclient <-- (debian names the client package smbclient, not samba-client)

edit samba config file

# nano /etc/samba/smb.conf
. . . scroll down to
[global]
workgroup = <<your actual workgroup name>>
[homes]
read only = no
[<<your share name tag>>]
read only = no
locking = no
path = <<actual path of share folder, ie /media/sharedrive>>
guest ok = no

save the conf file (ctrl-x, etc)
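
before moving on, the edited conf can be sanity-checked with samba's own checker, which flags unknown parameters and prints the effective share definitions:

# testparm /etc/samba/smb.conf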

add the user(s) that will access samba. these users must already be part of the user base in the current debian OS setup, ie : add users to debian AND then add the same set of users again to samba. the samba userlist is a subset of the debian user list, so you must add each user first to debian's list, then after that to samba's.

if they are not there yet, use

# adduser [username1]
# (enter password)

and to force odd names through,

# adduser --force-badname [username2]
# (enter password for username2)

then register each of them with samba :

# smbpasswd -a <<username3, ie : root>>
# (samba asks for a password entry for username3)

rinse and repeat
check samba user access list

# pdbedit -w -L
all ok? restart samba.

# /etc/init.d/samba restart

go into the sharing folder and make sure the [usernames] have rights to read and write, by using the command below (otherwise, when the share is added on the windows side, you cannot copy INTO the folder)
# chown -R [username]: /[sharepath]/*

once done, do a test network-folder add from the windows side using the normal map network drive feature. the process should prompt for [username]+[password] to access the samba shared folder. try to copy something into it ... et voila !!!
additional fstab edits are needed to automount samba share resources (a hypothetical cifs line is sketched below).
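
something along these lines should do (a sketch only; the server name, share name and credentials file are all made up, and the cifs-utils package needs to be installed):

//debianbox/sharedrive /mnt/sharedrive cifs credentials=/root/.smbcred,uid=1000,iocharset=utf8 0 0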

small quirk in debian: while GPARTED is running and nautilus is the file manager, nautilus will not be able to mount drives (GPARTED has to be closed first)

the structure of the fstab pointers, specifically for boot-up auto-mount. there seems to be a glitch whereby if a partition is already mounted via GPARTED, it won't automount after reboot using fstab, so all auto-mount targets need to be unmounted from GPARTED first.

AAA/BBB = GPARTED partition description (eg : /dev/sda1)
[path]/[path] = target mountpoint description (eg : /media/drive1)
CCCC = file system (eg : NTFS, ext3, ext4)
[option],[option] = options description (eg : rw, user, auto)
[dump] = why must they call it a dump? it flags the filesystem for the old dump(8) backup tool (0 / 1)
[pass] = why must they call it a pass? it is the fsck check order at boot (0 = skip, 1 = root fs, 2 = everything else)

http://en.wikipedia.org/wiki/Fstab

/dev/sdc1 /media/DATA ext4 rw,user,auto 0 2

[/media/DATA] needs to be an actual user-created empty folder, sitting inside the [media] folder as a "place holder".
after auto-mount success, assuming the aim is to make the drives accessible from windows (and all folders are already listed inside the smb.conf file), it is important that the [username] opening the drives from windows gets the [chown -R] treatment, so that the drives allow R/W access from the windows side.

after a few rounds of install, uninstall, on/off, on/off, on/off ... (many times over) ... a mount error finally occurred which required a chkdsk-equivalent, fsck

# /sbin/fsck -y /dev/sdx

(the "-y" answers yes to every repair prompt. the alternative "-p" does an automatic "preen" of only the safe fixes without asking; note that -y and -p cannot be combined)

or in rare cases, DD-bomb it

# dd if=/dev/zero of=/dev/sdx bs=64M

or, to write off just the ends of the HDD: get the number of blocks/sectors of the HDD and seek to the last 1% of it (where xxxxxx = the block number to start zeroing at; note that dd's seek counts in units of bs, here 64M blocks, not sectors, and seek positions the output whereas skip would merely skip input zeros). @ about 170MB/s ... it will take 6 seconds to write 1GB of zeros.

# dd if=/dev/zero of=/dev/sdx bs=64M seek=xxxxxx


number of 512-byte sectors = 7,814,037,168

976,754,646 blocks total? that is the same drive counted in 4KiB blocks: 976,754,646 × 4,096 = 7,814,037,168 × 512 = 4,000,787,030,016 bytes, i.e. a 4TB drive.
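
working the dd math out for this 4TB example: 4,000,787,030,016 bytes / 67,108,864 bytes per 64M block ≈ 59,616 blocks, so starting the zeroing at the 99% mark means roughly seek=59020, wiping the last ~37GiB (around 4 minutes at 170MB/s).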


combining all the HDDs into a DIY HDD cage, made of brass strips.


the same stack of drives under FLIR. 4 powered, 3 in full use: 1 at the top, 1 in the middle and 2 at the bottom of the stack. it is strange that smartmon reports the internal temperature as 31°C while FLIR is seeing peaks of 40°C. is that why HDDs fail, because smartmon reports too low a temperature? (the FLIR is reading surface/PCB hot spots, while the SMART sensor sits elsewhere inside the drive, so some gap is expected)


after ripping the old drives out of the DLINK NAS, here we have a linux screencap of the read/write bench test using DISK UTILITY 3.02 on 1 of the "broken" HDDs (an example piece, a 2.5" 160GB old notebook drive). from the weird lines in the middle we can safely say there is a bad sector problem of some sort.


by using [parted -l] to list the HDD sector/block sizes (in this case 512/512),
we can then use badblocks to "grind" up the HDD


# badblocks -b 512 -c 131072 -p 5 -o BBRR -w -s -v /dev/xxx


[-b 512] corresponds to the block size, so that the output file [-o BBRR] contains the correct LBA list for correction.
[-c 131072] specifies the number of blocks tested at a time (higher should be faster). @ 131072 it takes up about 139MB of memory.
[-p 5] specifies the number of passes to run
[-w] specifies a write-and-read-back test (destructive: it overwrites the whole drive)
[-s] shows the realtime test progress
[-v] verbose mode
[/dev/xxx] the target drive, of course
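
the BBRR list can then be fed back in when rebuilding the filesystem — a sketch, with the caveat that the -b value used in badblocks must match the filesystem block size for the numbers to line up:

# mkfs.ext4 -l BBRR /dev/xxx1 (marks the listed blocks as bad at format time)
# e2fsck -l BBRR /dev/xxx1 (or adds them to an existing ext filesystem)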




@ 2TB size, a 25% scan took 1 hour ! ... it's going to be a long long haul


the full 10-HDD cage rough assembly. material cost = under S$40.
for people who are interested in getting some custom made, you can email me



and populated with 8 drives. to reduce the amount of material used, there are only 2 support stems, diagonal to each other, for the top few drives. due to inaccuracies in the drilling of the HDD mounting holes, the diagonal setup is more forgiving when holes end up slightly out of alignment. the base strips require 4 holes and a more rigid alignment of holes.


each of these brass strips costs about S$5. bloody expensive. and the real hardware shops do not sell loose cut pieces unless you buy the entire 8 metres of it LOLz!


the resultant setup is surprisingly easy to do. i should dream more and try more things, in order to discover more stupidly easy setups like this to solve and expand other projects.



there is then also the problem of noise in the power supplies, which shall be addressed by adding additional caps across the 3.3/5/12V rails.

parity file format, note to self

# mkfs.ext4 -m 0 -T largefile4 DEVICE

[mkfs.ext4] makes the ext4 filesystem, [-V] verbose, [-c] badblock check before build,
[/dev/xxx] the location, [-m 0] sets the reserved blocks to zero (no 5% root reserve, so the whole disk is usable for parity), [-T largefile4] uses the largefile4 profile (one inode per 4MiB, which suits a disk holding one giant parity file),
[-L xxx] volume label

http://linux.die.net/man/8/mkfs.ext4
http://man.gnu.org.ua/manpage/?5+/etc/mke2fs.conf

after GPARTing a drive, say sdd (creating sdd1), use : mkfs.ext4 -m 0 -T largefile4 /dev/sdd1 -L xxx
 

and the last 2 HDDs, taken from another DLINK, go in.


 
nearly fills the entire height.
no rocket science involved.
crude but effective space optimizing.

for some reason, i am unable to find the specs
of the fan i am using.
NIDEC beta SL D09B-12PLH
 
the last 2 drives are WD 3TB, but with vastly different character.
from the very nice data presented by backblaze last year and this year,
we know that the WD/seagate drives are the worse lot, and HGST are the best.
 
new trick for running dd :
(in terminal window 1) # dd *bla bla bla bla command
(in terminal window 2) # watch -n5 'sudo kill -USR1 $(pgrep ^dd)'
the watch command prompts the dd in window 1 to print an I/O stats update every n=5 seconds (GNU dd dumps its progress on receiving SIGUSR1)
 
 
**update** 

i just received my new (and cheap) SATA PCIE (single lane) expansion card with 4 ports (rated for SATA 6Gbps). the additional SATA card boots up displaying an ASMEDIA bios, not marvell? and it adds only about 1 second to the boot cycle. very fast, very nice! (i can help anyone buy this card if they need. reminder that this is x1 PCIE speed. approx price with shipping to almost anywhere worldwide is approx USD55)
 
as with ASMEDIA-bios SATA expansion cards, the OS sees the drives as if the SATA expansion were part of the motherboard; there is no need to install any drivers or set up any bios/jumpers. unbox, plug in = works ! a smooth ride for a mini ITX setup in LINUX.

the downside is the 2nd card, a mini PCIE x1 card which replaces the mini WIFI card. the mini SATA card which came in is too big (it was expected to happen), and so i will need to wait for a mini PCIE extender. this mPCIE 2-port expansion, if anybody is interested, is USD45 (nearly worldwide shipping included, please email to ask). the extender is not cheap: plug card + ribbon + receptor card will cost about USD40 for those who want to buy.
 
 
with all the HDDs powered up (but only 8 with SATA connections), the mini voltage readers and the cable temperatures suggest that i will need to mod the power supply distribution system further. currently, the distribution module for the top 6 drives is a small sub-module with some extra capacitors.

and this is the little guy that is too big for the mPCIE space.






