Jason
Developer

806 posts

Posted on 22 September 2010 @ 06:50, edited 25 January 2011 @ 19:32
This thread focuses on Intel motherboards that are compatible with FreeBSD and suitable for filesharing.


Zotac H55-ITX (Socket 1156 - Core i3/i5/i7)

AnandTech Review(ext) | Zotac product website(ext)

Advantages:
- low power consumption
- small form factor
- 6x full-speed SATA/300 ports
- PCI-express x16 slot



SuperMicro C7SIM-Q (Socket 1156 - Core i3/i5/i7)

SuperMicro product website(ext)

Advantages:
- low power consumption
- 6x full-speed SATA/300 ports
- three PCI-express slots
- four DIMM slots (up to 16GiB non-ECC DDR3 RAM)
- dual Intel gigabit Ethernet



SuperMicro X8ST3-F (Socket 1366 - Core i7/Xeon)

SuperMicro product website(ext)

Advantages:
- very moderate power consumption given its features
- 6x full-speed SATA/300 ports
- 8x full-speed SAS/300 ports via LSI 1068E controller (fully compatible with BSD/Solaris)
- four PCI-express slots
- six DIMM slots (up to 24GiB non-ECC or ECC unbuffered DDR3 memory)
- IPMI on dedicated LAN port

Obsolete
User

4 posts

Posted on 18 January 2011 @ 18:29
Note: The SuperMicro X8ST3-F is Socket 1366, NOT Socket 1156!
Jason
Developer

806 posts

Posted on 19 January 2011 @ 18:22
Fixed; thanks!
Obsolete
User

4 posts

Posted on 24 January 2011 @ 20:54, edited 20:55
I'm looking into buying some hardware; so far the SuperMicro X8ST3-F looks really interesting. I would like to add the following note:

This motherboard also supports ECC memory as long as you use a Xeon processor (it's also noted on the product website).

[quote]
Memory Capacity
• Supports up to 24 GB 1333 / 1066 / 800MHz DDR3 ECC / non-ECC Un-Buffered memory

Error Detection
• Corrects single-bit errors
• Detects double-bit errors (using ECC memory)
• Supports Intel® x4 and x8 Single Device Data Correction (SDDC)
[/quote]
Jason
Developer

806 posts

Posted on 25 January 2011 @ 19:28, edited 19:29
I recall something about the X58 chipset only using ECC on the command path and not on the data path; any chance that might be correct?

It's a very promising board though, with a lot of functionality at quite a low price; it would cost far more if you had to buy it all separately. And you still have four PCI-express slots, which is great for extensibility.
Obsolete
User

4 posts

Posted on 26 January 2011 @ 17:06
Jason, I've never heard that, and I can't find much about it on the internet either.

What I do know is that the memory controller is integrated into the CPU, and that one of the main differences between the i7 and the Xeon is ECC support. Of course the BIOS of the motherboard has to support ECC as well, and I think a certain CPU pin has to be connected for the support too.

I would expect that if the manufacturer lists ECC support in the official specs on their website, it is true. Especially since it lists what kind of error correction is supported:

Error Detection
• Corrects single-bit errors
• Detects double-bit errors (using ECC memory)
• Supports Intel® x4 and x8 Single Device Data Correction (SDDC)

I don't see why they would list these specs if it's not truly supported.

Personally I'm still in doubt between a SuperMicro X8ST3-F and a SuperMicro X8SIL-F setup.
Jason
Developer

806 posts

Posted on 27 January 2011 @ 22:10
The X8SIL-F has only 4 memory banks. For a serious ZFS setup it might be nice to have 6 banks, so you can install 24GiB of DDR3 memory quite cheaply: 3 pairs of 2x4GiB comes to 3x 65 EUR, roughly 200 euro; not a lot of money for 24GiB of DDR3! Of course ECC unbuffered is a tad more expensive: 100 euro for 8GiB (2x4GiB). Not sure how those relate to prices in dollars.

But since DDR3 memory is extremely cheap now, having 6 memory banks to install more memory would be an interesting plus. I don't think you can go wrong with the X8ST3-F; it's got everything for a great ZFS setup!
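The memory-cost arithmetic above can be checked with a quick sketch (the EUR prices are the illustrative figures quoted in this post, not current market prices):

```python
# Quick check of the memory-cost arithmetic above, using the
# illustrative EUR prices quoted in the post.
banks = 6
kit_dimms = 2            # one kit = 2x4GiB
dimm_gib = 4
kit_price_non_ecc = 65   # EUR per 2x4GiB kit (non-ECC, as quoted)
kit_price_ecc = 100      # EUR per 2x4GiB kit (ECC unbuffered, as quoted)

kits = banks // kit_dimms             # 3 kits fill all 6 banks
total_gib = kits * kit_dimms * dimm_gib

print(total_gib)                      # 24 GiB installed
print(kits * kit_price_non_ecc)       # 195 EUR non-ECC, i.e. roughly 200
print(kits * kit_price_ecc)           # 300 EUR for ECC unbuffered
```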
czhao
User

2 posts

Posted on 8 February 2011 @ 21:08
I'm very interested in the SuperMicro X8ST3-F board. Does anyone have any power consumption figures?
cpny
User

18 posts

Posted on 23 December 2011 @ 17:44, edited 17:58
Quick question if anyone can answer: I like the 1366 socket, so I was thinking of reusing my CPU with the SuperMicro X8ST3-F. It says it has 8 SAS ports; are those 1-to-1 ports, or 1-to-4 ports such as on the SuperMicro controller card and the IBM M1015, etc.?

Cheers

Also, is 24GB of RAM enough for ZFS to perform without a bottleneck? (dedup not included)
danswartz
User

252 posts

Posted on 23 December 2011 @ 20:55
4GB is more than enough! Anything more (without dedup) is only used as ARC (read cache). Some usages don't really benefit from cache anyway, but 24GB is way more than enough :)
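To illustrate the point about extra RAM mostly becoming read cache, here is a rough sketch; the "RAM minus ~1GiB" figure is an assumption modelled on FreeBSD's historical default for `vfs.zfs.arc_max`, not an exact rule:

```python
# Rough sketch: beyond a small baseline, extra RAM on a ZFS box mostly
# becomes ARC (the read cache). The "RAM minus ~1GiB" default is an
# assumption based on FreeBSD's historical vfs.zfs.arc_max default.
GIB = 1024**3

def approx_arc_max(ram_bytes):
    # leave roughly 1GiB for the OS and other kernel memory (assumption),
    # but never let ARC drop below half of RAM
    return max(ram_bytes - GIB, ram_bytes // 2)

print(approx_arc_max(24 * GIB) // GIB)  # ~23 GiB usable as read cache
```

On a real system you would check the actual value with `sysctl vfs.zfs.arc_max` rather than trust this approximation.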
cpny
User

18 posts

Posted on 23 December 2011 @ 22:38
Glad to hear it, danswartz; thank you very much for your help today :)
Bisshop
User

9 posts

Posted on 4 September 2012 @ 07:11
Hey,

I'm busy putting together a new server and got convinced to give ZFS a real try. So I'm thinking about buying this SuperMicro X8ST3-F board; I can't find anything comparable in my local shops. Any similar mobo worth comparing that isn't in this thread?
I've read a lot about needing lots of RAM.
But how important is processing power for ZFSGuru? What kind of CPU would you advise putting into this?
CiPHER
Developer

1199 posts

Posted on 4 September 2012 @ 07:58
Bisshop: normal modern motherboards support 8GiB UDIMM memory, which means you can build a 32GiB RAM system cheaply. I recommend this instead of an expensive server build. The boards listed in this thread are nice, but dated. Something newer is more power efficient and cheaper, and gets you the same or better performance.

You don't need much CPU power, but you do want lots of memory and good PCI-express expansion capability. For example, you may want 1 or 2 add-on controllers (IBM M1015) plus one 10 gigabit network controller. That means you already need three PCI-express x8 slots.

If you are going to use virtualization, you may want VT-d CPU support. That means you have to get an Intel Core i5 series processor, which is already fairly expensive.

ZFS likes lots of memory and lots of CPU cores, so a slow quadcore is much better than a higher-clocked dualcore.
Bisshop
User

9 posts

Posted on 4 September 2012 @ 08:44, edited 08:50
Is it just my local shop, or are mobos with more than 2 PCIe x8 slots (downgraded x16) rare?
The best I can find are boards with three x16 slots, of which two work at x8 and one at x4.
I'm mostly looking at ASUS boards at the moment.
Am I just looking in the wrong place (http://www.tones.be/producten/moederbord-intel-voor-socket-1155), or what?

Edit: Gonna read up on ServeTheHome (cool site, didn't know it) and come back with some more detailed questions later :p
CiPHER
Developer

1199 posts

Posted on 4 September 2012 @ 09:07
You want to read this guide if you own an IBM M1015 controller:
http://www.servethehome.com/ibm-serveraid-m1015-part-4/
It tells you how to flash the firmware to the LSI IT-mode firmware.

You can insert a PCI-express x8 controller in a PCI-express x16 slot. In fact, normally any combination should work. So if you have a PCI-express x4 slot with an 'open end' and you can make your x8 controller physically fit in there, it should work!

But BIOS quirks can cause problems, and some RAID controllers use bridge chips that can cause problems too. In particular, you want a motherboard with Interrupt 19 support. Some boards have a BIOS option to enable or disable this. If enabled, it allows the firmware of a controller to initialize itself during the POST (Power-On Self-Test) phase.

If you don't need to connect more than 8+6=14 disks, a Micro-ATX board with 2x PCI-express x16 slots (working as x8 when both are used) may be sufficient, since this allows you to add a 10 gigabit card in the future. Or you can add a second controller instead. There are 16-port controllers available (4 mini-SAS connectors), but those are much more expensive than an IBM M1015, which goes for just 115 euro over here.
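As a back-of-envelope check that an x8 slot comfortably feeds a full M1015, here is a sketch; all the throughput figures are rough assumptions, not measurements:

```python
# Ballpark sanity check: can a single PCIe 2.0 x8 slot feed 8 spinning
# disks behind an M1015? All throughput figures are rough assumptions.
lane_mb_s = 500            # usable MB/s per PCIe 2.0 lane, approx.
slot_mb_s = 8 * lane_mb_s  # x8 slot: ~4000 MB/s

disks = 8
disk_mb_s = 150            # sequential speed of one HDD (assumed)

aggregate = disks * disk_mb_s
print(aggregate)                 # 1200 MB/s aggregate from the disks
print(slot_mb_s >= aggregate)    # True: the slot has ample headroom
```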
Pantagruel
User

45 posts

Posted on 4 September 2012 @ 09:13, edited 5 September 2012 @ 06:50
Bisshop
User

9 posts

Posted on 4 September 2012 @ 09:25, edited 09:27
What shop is "over here"? Mine seems to charge around triple that price for an IBM M1015 card :)
CiPHER
Developer

1199 posts

Posted on 4 September 2012 @ 11:27
You are from Belgium? If so, can't you order from a Dutch webshop instead?

See: http://tweakers.net/pricewatch/291869/ibm-serveraid-m1015-sas-sata-controller-for-system-x.html
Pantagruel
User

45 posts

Posted on 5 September 2012 @ 06:48, edited 06:51
I don't get it; I posted a comment yesterday which remained invisible. Tried again today and still no comment visible :(.
Even stranger: I can edit my previous comment, but this changes nothing regarding the comment being visible or not.
When I copy/paste the comment into this one, it also becomes invisible.
Pantagruel
User

45 posts

Posted on 5 September 2012 @ 06:55, edited 06:55
On-topic: motherboards with multiple full-lane PCIe x16 slots do exist; they use a PLX chip to multiplex and distribute the available PCIe lanes to the different slots. Those boards do not come cheap due to the added PLX chip (allegedly costing around $40 apiece) and are mainly aimed at workstation use (the ASRock X79 Extreme11, for instance, at approx. $600; you do however get dual 1Gbit NICs and an 8-channel LSI SAS chip onboard).
CiPHER
Developer

1199 posts

Posted on 5 September 2012 @ 06:59, edited 07:03
Pantagruel wrote: Motherboards with more than two PCIe x8 slots are available but will set you back some serious dosh.
The ASRock X79 Extreme11, for instance, costs a jaw-dropping $600 but has 72 PCIe 3.0 lanes available thanks to a pair of PLX PEX 8747 chips. The value-added bonus is the presence of an 8-channel SAS/SATA chip from LSI.
This was your comment? The +- sign was the problem; I removed it. :P

But I am not a fan of PLX. In fact, I hate those solutions. They raise power consumption and only give you 'fake' bandwidth, lowering the performance of all I/O by adding latency. The only real advantage is that a PLX chip can assign bandwidth flexibly, so all seven PCIe slots can each get high bandwidth, as long as they are not all used at the same time.

But with ZFS, you are using the disks at the same time. Latency is key here, so you want it as low as possible. That means not using PLX chips is the best solution; just make sure you have enough 'genuine' bandwidth for your storage!

Even the simplest systems should have this: every CPU since Sandy Bridge comes with 16 lanes, which means two PCI-express slots each able to deliver 8 lanes. So there's room for two controllers, or one controller plus one 10 gigabit network adapter. If you need more PCI-express lanes, I suggest a platform that can natively deliver them, like server boards and chipsets.
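That lane budget can be sketched as a simple check; the device lane counts are assumptions based on the cards mentioned in this thread (an IBM M1015 is a PCIe x8 card, and 10GbE NICs are typically x8 too):

```python
# Sketch of the lane budget described above: 16 CPU lanes split into
# two x8 slots. Device lane counts are assumptions based on the thread
# (an IBM M1015 is a PCIe x8 card; 10GbE NICs are typically x8 too).
cpu_lanes = 16
devices = {"IBM M1015 controller": 8, "10 gigabit NIC": 8}

used = sum(devices.values())
print(used)               # 16 lanes consumed
print(used <= cpu_lanes)  # True: both cards fit without a PLX switch
```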