The Spyder
2[H]4U · Joined: Jun 18, 2002 · Messages: 2,628
Ahhhh something I had not even thought of. Oh well, my second 1068E will be here in a few days and that will solve that. Thanks!
VMDirectPath doesn't work with SATA controllers that are integrated into the southbridge/northbridge, so that's a problem you cannot fix.
Not a true statement at all. DirectPath works fine with the built-in ICH10 controller on that motherboard. It is configured on the system sitting right beside me right now that I am testing with.
Unfortunately, I don't know what is causing your problem.
Sep 11 23:17:26 fileserver smbsrv: [ID 421734 kern.notice] NOTICE: [NT Authority\Anonymous]: media access denied: IPC only
Sep 11 23:24:00 fileserver last message repeated 1557 times
Sep 11 23:24:04 fileserver smbsrv: [ID 421734 kern.notice] NOTICE: [NT Authority\Anonymous]: media access denied: IPC only
Sep 11 23:30:41 fileserver last message repeated 1555 times
Sep 11 23:30:45 fileserver smbsrv: [ID 421734 kern.notice] NOTICE: [NT Authority\Anonymous]: media access denied: IPC only
Sep 11 23:37:07 fileserver last message repeated 1515 times
Sep 12 07:04:07 fileserver ahci: [ID 517647 kern.warning] WARNING: ahci0: watchdog port 3 satapkt 0xffffff01cd800cb8 timed out
Sep 12 07:05:52 fileserver ahci: [ID 517647 kern.warning] WARNING: ahci0: watchdog port 3 satapkt 0xffffff01cd845e90 timed out
Sep 12 07:05:52 fileserver ahci: [ID 517647 kern.warning] WARNING: ahci0: watchdog port 3 satapkt 0xffffff01cc9a7640 timed out
Sep 12 08:39:08 fileserver ahci: [ID 777486 kern.warning] WARNING: ahci0: ahci port 3 has interface fatal error
Sep 12 08:39:08 fileserver ahci: [ID 687168 kern.warning] WARNING: ahci0: ahci port 3 is trying to do error recovery
Sep 12 08:39:08 fileserver ahci: [ID 551337 kern.warning] WARNING: ahci0: Transient Data Integrity Error (T)
Sep 12 08:39:08 fileserver Internal Error (E)
Sep 12 08:39:08 fileserver CRC Error (C)
Sep 12 08:39:08 fileserver ahci: [ID 657156 kern.warning] WARNING: ahci0: error recovery for port 3 succeed
Sep 12 09:09:03 fileserver ahci: [ID 777486 kern.warning] WARNING: ahci0: ahci port 3 has interface fatal error
Sep 12 09:09:03 fileserver ahci: [ID 687168 kern.warning] WARNING: ahci0: ahci port 3 is trying to do error recovery
Sep 12 09:09:03 fileserver ahci: [ID 551337 kern.warning] WARNING: ahci0: Transient Data Integrity Error (T)
Sep 12 09:09:03 fileserver Internal Error (E)
Sep 12 09:09:03 fileserver CRC Error (C)
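Those watchdog and fatal-error warnings all name port 3, which points at one drive, cable, or backplane slot rather than the whole controller. A quick way to confirm that is to tally WARNING lines per port. A minimal Python sketch (the sample lines are taken from the excerpt above; point it at /var/adm/messages on the real box):

```python
# Sketch: tally ahci WARNING lines per port from /var/adm/messages-style
# input. Lines without a port number (e.g. the CRC/Internal Error detail
# lines) are skipped.
import re
from collections import Counter

PORT_RE = re.compile(r"ahci0: (?:watchdog )?(?:ahci )?port (\d+)")

def parse_ahci_errors(lines):
    """Count WARNING lines per AHCI port number."""
    counts = Counter()
    for line in lines:
        if "WARNING" not in line:
            continue
        m = PORT_RE.search(line)
        if m:
            counts[int(m.group(1))] += 1
    return counts

sample = [
    "Sep 12 07:04:07 fileserver ahci: [ID 517647 kern.warning] WARNING: ahci0: watchdog port 3 satapkt 0xffffff01cd800cb8 timed out",
    "Sep 12 08:39:08 fileserver ahci: [ID 777486 kern.warning] WARNING: ahci0: ahci port 3 has interface fatal error",
    "Sep 12 08:39:08 fileserver ahci: [ID 551337 kern.warning] WARNING: ahci0: Transient Data Integrity Error (T)",
]

print(parse_ahci_errors(sample))  # Counter({3: 2})
```

If every warning lands on one port, swap that cable or drive first before suspecting the controller.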
Well, sure, but that's a different issue altogether. The SATA ports are all associated with a single controller, so they go as one. It can be a bit of a mystery why some controllers work in passthrough and others don't; sometimes the only viable thing is to try it and see.
Have you checked the wiring/cables at both ends on port 3?
How can I tell which port is port 3? I have some drives connected to the motherboard and other drives connected to an LSI card. However, I checked ALL of the wiring and it looks fine.
I finally got everything working, but was disappointed by the speeds so far. On a 2k8r2 VM (running on the same host), CDM is only showing ~58 MB/s sequential reads and ~200 MB/s sequential writes. This is using the e1000 network adapters. To my local machine (a building away, still gigabit, but through 5 switches total), I am seeing 38 MB/s read and 98 MB/s write.
When testing the SMB transfer between Solaris/OI and Windows using CDM, I always get ~40 MB/s read and ~100 MB/s write.
But when using the Windows built-in tools (robocopy or just Explorer), I can get 100-110 MB/s on both read and write between Solaris and Windows (network utilization in Windows Task Manager is about 95%-99%).
I can get ~100 MB/s read/write without any issues when using CDM to benchmark Win-to-Win.
So the conclusion is that CDM doesn't work well with the Solaris/OI SMB server? Could someone please verify this?
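One way to take CDM out of the equation is a crude timed sequential copy. A Python sketch (the UNC path in the comment is a placeholder for your own mapped share; as written it uses a local temp directory so it runs anywhere, and note the read pass may be served from cache if the file is small):

```python
# Sketch: crude sequential write/read throughput check, independent of CDM.
# Point `target` at a file on the SMB share to test the share itself.
import os
import time
import tempfile

SIZE_MB = 64
CHUNK = 1024 * 1024  # 1 MiB per write/read

def measure(target):
    """Return (write_mb_s, read_mb_s) for a SIZE_MB sequential copy."""
    buf = os.urandom(CHUNK)
    t0 = time.perf_counter()
    with open(target, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    write_mb_s = SIZE_MB / (time.perf_counter() - t0)

    t0 = time.perf_counter()
    with open(target, "rb") as f:
        while f.read(CHUNK):
            pass
    read_mb_s = SIZE_MB / (time.perf_counter() - t0)
    os.remove(target)
    return write_mb_s, read_mb_s

# target = r"\\fileserver\share\throughput.tmp"   # placeholder share path
target = os.path.join(tempfile.mkdtemp(), "throughput.tmp")
w, r = measure(target)
print("write %.0f MB/s, read %.0f MB/s" % (w, r))
```

If this matches the robocopy/Explorer numbers rather than CDM's, that points at CDM's access pattern rather than the Solaris SMB server.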
Well... I think I found my problem:
Code:
HARDWARE IMPENDING FAILURE GENERAL HARD DRIVE FAILURE [asc=5d, ascq=10]
Code:
Error: S:4 H:310 T:0
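For anyone hitting the same message: the asc/ascq pair comes from the SCSI additional-sense-code tables. ASC 0x5D is the failure-prediction family, and ASCQ 0x10 is specifically the "hardware impending failure general hard drive failure" prediction, i.e. the drive itself is saying it expects to die. A tiny lookup sketch (table trimmed to the codes seen in this thread; the full list is in the T10 SPC spec):

```python
# Sketch: decode the asc/ascq pair from a sense message such as
# "[asc=5d, ascq=10]". Table trimmed to the 0x5D family entries that
# matter here; see the SPC additional-sense-code tables for the rest.
ASC_ASCQ = {
    (0x5D, 0x00): "FAILURE PREDICTION THRESHOLD EXCEEDED",
    (0x5D, 0x10): "HARDWARE IMPENDING FAILURE GENERAL HARD DRIVE FAILURE",
}

def decode(asc_hex, ascq_hex):
    """Map hex-string asc/ascq values to their SPC description."""
    key = (int(asc_hex, 16), int(ascq_hex, 16))
    return ASC_ASCQ.get(key, "unknown asc/ascq %#x/%#x" % key)

print(decode("5d", "10"))  # HARDWARE IMPENDING FAILURE GENERAL HARD DRIVE FAILURE
```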
That's two Hitachi drives that are dead within about two months... I'm not that thrilled so far.
Is it possible for napp-it to e-mail me when SMART reports a hard drive is failing?
If you read a few posts above you'll see that ZFS reported my pool as being perfectly fine but I manually checked the SMART info and it says that a hard drive is "IMPENDING HARDWARE FAILURE".
It would have been nice to receive an e-mail notification so I knew ahead of time, etc...
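Until an alerting feature is in place, a cron'd script can cover the gap. A sketch, assuming smartmontools is installed and mailx works on the box; the device list and recipient address are placeholders for your own setup:

```python
# Sketch: cron-able SMART health check that mails when a drive predicts
# failure. DEVICES and ALERT_TO are placeholders, not real values.
import subprocess

DEVICES = ["/dev/rdsk/c0t0d0"]   # placeholder device list
ALERT_TO = "admin@example.com"   # placeholder recipient

def looks_failing(smartctl_output):
    """True if `smartctl -H` output reports anything but PASSED/OK.

    Handles both the ATA health line ("overall-health ... PASSED") and
    the SCSI-style line ("SMART Health Status: OK").
    """
    for line in smartctl_output.splitlines():
        if "overall-health" in line or "Health Status" in line:
            return not ("PASSED" in line or "OK" in line)
    return False  # no health line found; treat as not failing

def check_and_mail():
    for dev in DEVICES:
        out = subprocess.run(["smartctl", "-H", dev],
                             capture_output=True, text=True).stdout
        if looks_failing(out):
            subprocess.run(["mailx", "-s", "SMART failure on %s" % dev,
                            ALERT_TO], input=out, text=True)

# The parsing half can be exercised without touching hardware:
sample = ("SMART Health Status: HARDWARE IMPENDING FAILURE "
          "GENERAL HARD DRIVE FAILURE [asc=5d, ascq=10]")
print(looks_failing(sample))  # True
```

Dropped into root's crontab daily, this would have flagged the drive before the pool ever noticed.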
OK, I need help from someone familiar with Solaris networking.
I'm trying to connect my ZFS NAS to two VMware hosts over 10-gigabit Ethernet, but without using a switch.
I have a dual-port 10GbE card in the NAS and single-port cards in each VMware server. I figured it would be simple: just create a bridge on the ZFS NAS (Solaris) and it would work. The problem is that after I use dladm to create a bridge between ixgbe0 and ixgbe1, I can't assign an IP to the bridge device (bridge0).
Any help would be appreciated.
I'm confused. I thought he was talking about the Solaris box bridging two 10GbE NICs. The VMware hosts are just clients; they are not involved in the bridging in any way, no?
Ah, got it. I assumed he was talking about an "all-in-one" ZFS NAS. I should really read more carefully before I answer!
He should certainly be able to do this on his Solaris ZFS host. Not sure why it isn't working.
I'm no expert, but maybe try this:
Set static IP on NIC1
Bridge NIC2 to NIC1
Plug a pc into NIC2 and see if you can ping NIC1's IP.
Could be wrong, but I don't think you configure IPs on bridges, even in pfSense. I'm going to test this in a VM as soon as I get a chance and edit this post. Please keep us updated on your progress, because I'm working on a solution at work where this option would come in handy.
EDIT: pfSense does allow setting an IP on a bridge.
EDIT2: What I was trying to get at is that maybe the NICs, even though they're part of a bridge, have to be configured individually.
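For what it's worth, that matches my understanding of Solaris/illumos bridging: the bridge itself isn't plumbed; the IP goes on a member link, and frames still forward between the bridged ports. A command sketch only, not tested here (link names are from the post, the IP and bridge name are placeholders):

```shell
# Sketch: bridge the two 10GbE ports, then put the IP on a member link.
dladm create-bridge -l ixgbe0 -l ixgbe1 nasbridge
ifconfig ixgbe0 plumb 10.10.10.1 netmask 255.255.255.0 up
```

Both VMware hosts would then point at 10.10.10.1 regardless of which port they're cabled to.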
What is the client? Win7? I seem to recall multiple threads with people reporting slow reads with CIFS.